Current Benchmarking Trends In Financial Risk Management

Robust financial risk management requires the alignment of three things, listed here in order of priority: management drive, data availability, and expertise in the team.

Since 2008, regulators have ensured that there is strong management drive towards establishing risk management frameworks. But the second priority, data availability, remains a challenge. The reason: data recording processes were not streamlined in the past, and there was no rigorous audit or validation of these processes. Hence data is limited, and what exists may be highly unreliable.

Earlier, models developed by risk managers were validated and approved by business personnel based entirely on the final output. If the output risk numbers aligned with their expectations, the model was accepted; otherwise it was rejected. Business personnel rarely, if ever, challenged a model on its internal methodology.

Data and process problems are also caused by continuous acquisitions and mergers of businesses and organizations, as well as changes in platforms. Regular churning of the portfolio in response to prevailing business and economic scenarios compounds this challenge.

Financial institutions now have dedicated teams of model validators whose job is to scrutinize model development end to end: understanding the business requirement, data collection, data cleansing, model development, model implementation, model documentation, and model monitoring.

Model developers and validators do not lack expertise in their business understanding, nor the skill to translate that understanding into the right models; rather, they are challenged by the scarcity and unreliability of data. This poses serious issues for risk models because of the principle of "garbage in, garbage out". It is an issue even in many Fortune 500 organizations.

To overcome these issues and ensure robust risk management, a benchmarking department is critical. The key roles of this team will be:

1) Develop independent benchmark modeling protocols, including replication, challenger models, and suggesting alternative industry standards.

2) Develop and implement strategic testing tools and methodologies, including scenario/sample selection, test databases, and standardized testing tools. In addition, they should be responsible for robust overlay/buffer analysis, so that it can be decided how much capital should be held over and above what the model suggests.

To perform its role efficiently, the team will have to find solutions at two levels: the data level and the model level. We now suggest some approaches at each level.

Data level solution approach

The main challenges within the data are limited history, gaps, and non-relevance due to changes in the business and the economy. In these cases, synthetic data should be generated, aligned with the business process and the problem at hand. The methodologies should be validated, and detailed documentation should explain the nitty-gritty of the assumptions.
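One simple way to generate synthetic data that stays aligned with the observed business process is a block bootstrap: resampling contiguous blocks of the actual history so that short-run autocorrelation is preserved. This is only an illustrative sketch of one such methodology; the function name and parameters here are our own choices, not an industry standard.

```python
import numpy as np

def bootstrap_synthetic(series, n_samples, block=4, seed=0):
    """Generate a synthetic series by resampling contiguous blocks of the
    observed history (a block bootstrap), preserving local autocorrelation."""
    rng = np.random.default_rng(seed)
    series = np.asarray(series, dtype=float)
    pieces, total = [], 0
    while total < n_samples:
        # Pick a random starting point and copy a contiguous block from history
        start = rng.integers(0, len(series) - block + 1)
        pieces.append(series[start:start + block])
        total += block
    return np.concatenate(pieces)[:n_samples]

history = [0.01, 0.02, -0.01, 0.03, 0.00, 0.015, -0.005, 0.02]
synthetic = bootstrap_synthetic(history, n_samples=20)
```

Any such generator should itself be validated, and its assumptions (block length, stationarity of the history) documented, as argued above.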

A second set of challenges concerns the use of forecasted data. Many processes use forecasted external macro-economic data generated by externally recognized agencies. Observed carefully, these forecasted series tend to flatten out after some time: they do not take any future volatility into account. It is natural for forecasting agencies to take this approach, because nobody can foresee future events.

This can be a source of error, because the forecasting assumptions used by agencies might not be relevant for the business problem at hand, creating a bias in the forecasted data. This can be a serious issue in valuation and PPNR (Pre-Provision Net Revenue) models. A process map for minimizing these biases needs to be created to ensure robust risk management.
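One possible mitigation for the flattening problem is to overlay historically estimated volatility on top of the agency's point forecast, turning a single flat path into a fan of scenario paths. The sketch below assumes shocks are drawn from the standard deviation of past period-over-period changes; the function and its parameters are hypothetical, illustrating the idea rather than prescribing a method.

```python
import numpy as np

def add_volatility(point_forecast, historical_changes, n_paths=500, seed=0):
    """Overlay historical volatility on a flat point forecast by drawing
    cumulative normal shocks scaled to past period-over-period changes."""
    rng = np.random.default_rng(seed)
    sigma = np.std(historical_changes, ddof=1)  # volatility estimated from history
    horizon = len(point_forecast)
    # Cumulative sum makes dispersion widen with the horizon, as it should
    shocks = rng.normal(0.0, sigma, size=(n_paths, horizon)).cumsum(axis=1)
    return np.asarray(point_forecast, dtype=float) + shocks

flat_forecast = [0.02, 0.02, 0.02, 0.02]           # a flattened agency series
past_changes = [0.01, -0.02, 0.015, 0.00, -0.005]  # observed history
paths = add_volatility(flat_forecast, past_changes)
```

The resulting paths keep the agency's central view but restore a dispersion consistent with observed history, which is what valuation and PPNR models lose when fed a flat series.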

Model level solution approach

Financial risk models mainly address five cases: stress testing, Value at Risk (VaR), Allowances for Loan and Lease Losses (ALLL), pricing, and valuations. For completeness we explain them briefly.

ALLL: ALLL models are built for reserve allocation. Reserves are allocated for a period of a year or less. A reserve is the allowance the business requires to cover losses that occur in the normal and expected course of business.
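At its simplest, such a reserve can be sketched with the standard expected-loss decomposition, EL = PD x LGD x EAD, summed over portfolio segments. The segment numbers below are entirely hypothetical, chosen only to show the arithmetic.

```python
def expected_loss(pd, lgd, ead):
    """Expected loss for one segment: probability of default x
    loss given default x exposure at default."""
    return pd * lgd * ead

# Hypothetical portfolio segments: (PD, LGD, EAD)
segments = [
    (0.02, 0.45, 1_000_000),
    (0.05, 0.60, 250_000),
]
reserve = sum(expected_loss(*s) for s in segments)
```

Real ALLL models layer far more on top of this (segmentation, emergence periods, qualitative overlays), but this decomposition is the usual starting point a benchmarking team would replicate.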

Stress testing checks the robustness of a given financial institution or portfolio in an economic crisis. Stress scenarios of independent variables specific to the portfolio are constructed, and values of the dependent variable are calculated using the modeled relationship.

VaR is a statistically calculated number measuring the unexpected loss within a firm or an investment portfolio over a specific time frame, at a given confidence level. It is measured in price units (or as a percentage of the portfolio's total value) and is applicable to any priced asset.

Before discussing valuation and pricing, let us briefly consider the nuances between price and value. Value is driven by demand from a utility perspective and is more of a long-term concept. Price is set by the matching of demand and supply: if supply is unlimited, then no matter what the value of a commodity, its price will fall. Demand and supply also depend on overall market trends, news (good or bad), and confidence, or the lack of it, in the economy. Price is generally a short-term concept.

Valuations: The valuation of a portfolio is the discounted value of its future cash flows. Value depends largely on utility, and utility is mostly stable, hence value is stable. It is assumed that whatever relationship the portfolio holds with external factors today will continue to hold in the future.
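The discounted-cash-flow definition above reduces to one line of arithmetic, shown here with the convention (our assumption) that `cash_flows[t]` arrives at the end of period `t + 1`:

```python
def present_value(cash_flows, discount_rate):
    """Discounted value of future cash flows: sum of CF_t / (1 + r)^t,
    with cash_flows[t] received at the end of period t + 1."""
    return sum(cf / (1.0 + discount_rate) ** (t + 1)
               for t, cf in enumerate(cash_flows))

# A cash flow of 110 one year out, discounted at 10%, is worth 100 today
value = present_value([110.0], discount_rate=0.10)
```

Everything contentious in a valuation model lives in the inputs (the projected cash flows and the discount rate), which is where the forecast-bias issues discussed earlier enter.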

Pricing: Price, as discussed, depends on current demand and supply, which in turn depend very much on current circumstances.

We now discuss the major models/methods used to address the above five cases. All the methods (and their combinations) discussed below can be applied to any of the cases, depending on the business problem and data.

Regression-based models constitute the majority of risk measurement models. In these models, external/internal variables are used to project the risk variable of interest. This is the preferred methodology for stress testing and valuations.

A benchmarking tool can be developed that holds a collection of all plausible external macro-economic variables as well as internal variables. Using this data structure, the tool can propose the best possible relationship between the variable of interest and these drivers.
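The core of such a tool can be sketched as an exhaustive search over subsets of candidate drivers, fitting an OLS regression to each and keeping the subset with the highest R-squared. This is a deliberately crude sketch: a production tool would add out-of-sample testing, sign checks, and multicollinearity screens, and the ranking criterion here is our assumption.

```python
import itertools
import numpy as np

def best_subset(y, candidates, max_vars=2):
    """Fit OLS on every subset of candidate drivers (up to max_vars)
    and return (names, r_squared) for the best-fitting subset."""
    y = np.asarray(y, dtype=float)
    best_names, best_r2 = None, -np.inf
    for k in range(1, max_vars + 1):
        for names in itertools.combinations(candidates, k):
            # Design matrix: intercept column plus the chosen drivers
            X = np.column_stack(
                [np.ones(len(y))] +
                [np.asarray(candidates[n], dtype=float) for n in names])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
            if r2 > best_r2:
                best_names, best_r2 = names, r2
    return best_names, best_r2

x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([0.3, -0.1, 0.7, 0.2, -0.4])
names, r2 = best_subset(2.0 * x1 + 1.0, {"x1": x1, "x2": x2})
```

Exhaustive search is only feasible for modest candidate sets; with hundreds of macro variables, stepwise or regularized selection would replace the inner loop.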

The second most common type of model is based on TPM (Transition Probability Matrix) methodologies. The objective is to find an optimized TPM using heuristic techniques, based on the available business problem and data. A tool built on various optimization routines will help in finding the best methodology.
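The simplest benchmark TPM, against which heuristic or optimized matrices can be compared, is the cohort estimate: row-normalized counts of observed state-to-state moves. A minimal sketch, assuming states are coded as integer indices:

```python
import numpy as np

def estimate_tpm(transitions, n_states):
    """Cohort-method TPM: count observed (from_state, to_state) pairs
    and normalize each row to sum to one."""
    counts = np.zeros((n_states, n_states))
    for i, j in transitions:
        counts[i, j] += 1.0
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # leave unobserved rows as all zeros
    return counts / row_sums

# Hypothetical observations: state 0 stayed twice, moved once; state 1 stayed once
tpm = estimate_tpm([(0, 0), (0, 1), (0, 0), (1, 1)], n_states=2)
```

Sparse data makes some rows unreliable or empty, which is exactly where the optimization routines mentioned above, or the synthetic-data approaches from the data section, come in.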

The third type is simulation-based methodologies, which handle problems related to pricing and VaR. The base equation of these methods is composed of a deterministic part and a stochastic part. The deterministic part is generally regression-based, and the same kind of benchmarking used for regression models applies. For the stochastic part, a dynamic back-testing tool should be developed.
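A canonical example of this deterministic-plus-stochastic structure is geometric Brownian motion, where the drift term is deterministic and the diffusion term is stochastic. The sketch below simulates terminal prices and uses them to price a European call; the parameter values are hypothetical, and a risk-neutral drift equal to the discount rate is assumed.

```python
import numpy as np

def simulate_gbm(s0, mu, sigma, horizon, n_paths=10_000, seed=0):
    """Terminal prices under geometric Brownian motion:
    deterministic drift (mu - sigma^2/2) * T plus stochastic
    diffusion sigma * sqrt(T) * Z."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    return s0 * np.exp((mu - 0.5 * sigma**2) * horizon
                       + sigma * np.sqrt(horizon) * z)

# Hypothetical at-the-money call: S0 = K = 100, r = 2%, sigma = 20%, T = 1y
terminal = simulate_gbm(s0=100.0, mu=0.02, sigma=0.2, horizon=1.0)
call_price = np.exp(-0.02) * np.maximum(terminal - 100.0, 0.0).mean()
```

A dynamic back-testing tool for the stochastic part would compare the dispersion of such simulated paths against realized outcomes over rolling windows.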

The final category is specialized models, based on decision trees, clustering techniques, fuzzy logic, or artificial intelligence/neural networks. These are problem-specific models. There is no limit to the alternative methodologies for these models, and interestingly there is no guarantee that the best-fitting methodology is optimal. Hence case-by-case treatment is required.

Challenges for the benchmarking team

There are various organizational and conceptual challenges that every benchmarking team will face within its organization:

1) It will be a challenge to make other business units understand the idea and concept of benchmarking. Business owners understand their data best; convincing them that synthetic data generated by the benchmarking team represents their data will be a huge challenge.

2) Benchmarking tools are themselves models. Authenticating/validating them will be extremely important to establish faith in them. Ideally they should be validated by external agencies of high repute.

3) A big organizational and administrative challenge will be maintaining a coherent relationship between the model validation team and the benchmarking team.

Parting thoughts

Even though there is strong regulatory and management drive, limitations in the availability of reliable data constrain the establishment of efficient risk management processes. This raises the importance of benchmarking. Benchmarking is not limited to external data; it can also be performed against similar businesses within the organization. When data is not reliable, the benchmarking team can provide alternative sources and methodologies to support the business with risk numbers; when it is reliable, they help in authenticating it. Hence this team's role is to comfort the business when there are gaps and, when there are none, to give confidence by authenticating the numbers.

This article has contributions from Sixho Akkaya

