Model risk management is one of the most important aspects of financial risk management. As per SR 11-7, model risk occurs primarily for two reasons:
- The model may have fundamental errors and may produce inaccurate outputs when viewed against the design objective and intended business uses.
- The model may be used incorrectly or inappropriately.
Robust financial risk management requires three things to be aligned: management drive, data availability, and expertise in the team, listed here in order of priority.
Post-2008, regulators have ensured that there is a strong management drive towards establishing a risk management framework. But the second priority, data availability, is still a challenge, because data recording processes were not streamlined in the past. These processes were not rigorously audited or validated, so the available data is limited and may be highly unreliable.
The data and process problem is also a consequence of continuous acquisitions and mergers of businesses and organizations, as well as changes of platforms. Regular churning of the portfolio based on the business and economic scenarios of the moment compounds this challenge further.
At a superficial level, both SR 11-7 reasons may point to the incompetence of the business unit and the model developers. Given any model, it is trivial to list its limitations, create a finding, and hence assign it a fail grade. Data limitations force model developers to build models which even they agree fall under the above two criteria of model risk.
Given these limitations, model developers should focus on mitigation plans rather than remediation. Let us first understand remediation and mitigation in the context of model risk. Remediation means addressing the conceptual concerns about the model and correcting the model so that it satisfies the above guidelines.
Mitigation means admitting the weaknesses of the model with respect to the guidelines while acknowledging that, due to data limitations, those issues cannot be fully addressed. In this scenario, the first step business units and model developers should take is to demonstrate a clear understanding of the risk they are carrying.
Having demonstrated that understanding, model developers can choose a two-pronged approach:
- Benchmarking the methodology: when data is limited, any implemented methodology should be benchmarked against an alternate source of data with a similar risk profile. For example, when developing a credit risk model based on a Transition Probability Matrix with limited internal data, we should benchmark the methodology using external data, say Moody's data, assuming that data is adequately available. If the business problem we want to solve is effectively solved on Moody's data, we can at least take comfort that the methodology is sound.
- Bootstrapping the data: when data is limited, the model should be redeveloped multiple times from bootstrapped (resampled) data. This gives the range of variation of the model parameters.
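The bootstrapping idea can be sketched in a few lines. Everything below is a hypothetical illustration: the three-state rating scale, the tiny migration sample, and the cohort estimator are assumptions made for the example, not a production estimation method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical, deliberately small sample of one-period rating migrations.
# Each row is (rating at t, rating at t+1); states: 0 = A, 1 = B, 2 = Default.
migrations = np.array([
    [0, 0], [0, 0], [0, 1], [0, 0],
    [1, 1], [1, 0], [1, 2], [1, 1],
    [2, 2], [2, 2],
])

def estimate_tpm(sample, n_states=3):
    """Cohort estimate of the transition probability matrix."""
    counts = np.zeros((n_states, n_states))
    for frm, to in sample:
        counts[frm, to] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0  # guard against states absent from a resample
    return counts / row_sums

# Bootstrap: resample the migrations with replacement, re-estimate the TPM
# each time, and record the resulting parameter estimates.
n_boot = 1000
estimates = np.empty((n_boot, 3, 3))
for b in range(n_boot):
    idx = rng.integers(0, len(migrations), size=len(migrations))
    estimates[b] = estimate_tpm(migrations[idx])

# Range of variation for one parameter of interest, the B -> Default probability.
point = estimate_tpm(migrations)[1, 2]
lo, hi = np.percentile(estimates[:, 1, 2], [2.5, 97.5])
print(f"B->Default: point estimate {point:.2f}, "
      f"95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

A wide bootstrap interval is itself useful evidence: it quantifies, rather than hides, how unstable the parameters are on thin data.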
These are just some of the possible approaches; there can be many others depending on the business problem at hand.
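The benchmarking approach from the first bullet can be sketched similarly: apply the same estimation methodology to a richer external-style dataset and check that it recovers the known dynamics. The "external" matrix and simulated sample below are stand-ins invented for illustration, not actual Moody's data.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimate_tpm(sample, n_states=3):
    """Cohort estimate of a transition probability matrix from (from, to) pairs."""
    counts = np.zeros((n_states, n_states))
    for frm, to in sample:
        counts[frm, to] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0
    return counts / row_sums

# A known matrix standing in for a rich external dataset (e.g. a vendor's
# published migration study); states: 0 = A, 1 = B, 2 = Default (absorbing).
external_tpm = np.array([[0.90, 0.08, 0.02],
                         [0.10, 0.80, 0.10],
                         [0.00, 0.00, 1.00]])

def simulate_migrations(tpm, n_obs):
    """Draw one-period migrations from a given transition matrix."""
    start = rng.integers(0, 2, size=n_obs)  # start in a non-default state
    end = np.array([rng.choice(3, p=tpm[s]) for s in start])
    return np.column_stack([start, end])

# Apply the same estimation methodology to a large external-style sample.
external_sample = simulate_migrations(external_tpm, 10_000)
estimated = estimate_tpm(external_sample)

# If the methodology recovers the external dynamics closely, that supports its
# soundness even though the internal data is too thin to show it directly.
max_abs_err = np.abs(estimated[:2] - external_tpm[:2]).max()
print(f"max abs error on external benchmark: {max_abs_err:.3f}")
```

The point of the exercise is not the specific numbers but the separation of concerns: the external data validates the methodology, while the internal data, however thin, supplies the institution-specific parameters.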
Data is a natural phenomenon, but models are man-made. Consequently, no model can completely capture all available data and predict all future behavior. This problem only gets worse when data is limited and unreliable. There are few options available from a modeling perspective to deal with thin data sets, so modelers and business experts have to analyze the business problem from several alternative perspectives to achieve the best result. Here I have attempted to propose a couple of such approaches.