Model Monitoring: How to Manage the Risks Not Captured in VaR

The value-at-risk (VaR) framework is now an industry standard for measuring the risk of a financial portfolio. It enables banks to allocate risk capital after comparing the profitability and the risk of different business units.

However, while VaR has found favor in part because it is easy to understand, it is a simplification of the real world. It captures the main behavior of markets, but discounts some factors, either because they would make the model too complex or because they have historically never played an important role.

Simply put, VaR is one number that provides a rough idea of the extent of risk in a portfolio over a given time horizon. It is measured in price units (or as a percentage of the portfolio’s total value), and is applicable to any priced asset.

The financial crisis of 2008 proved that some previously ignored data behaviors – e.g., a tenor adjustment (a premium expected by a party that pays coupons at a higher frequency than its counterparty) on a basis swap – can suddenly and significantly contribute to a catastrophic event. Before 2008, in many VaR models, tenor adjustments were ignored – but this proved to be a significant mistake.

When two parties enter into a swap agreement where one party makes a monthly payment and the other pays quarterly, a tenor adjustment occurs. The party that pays monthly and receives quarterly will face credit risk. Consequently, that party demands a premium in its quarterly received coupons.

Prior to 2008, the risk of a basis swap was quite minimal. Indeed, primarily because of liquidity considerations, the spread between one-month and three-month swaps in the US was just 0.25 basis points (bps). Basis swap risk was therefore mostly ignored in VaR models.

Figure 1: The 2008 Financial Crisis – A Spike in Spreads


Source: Bloomberg

However, in the wake of the collapses of Bear Stearns and Lehman Brothers (which sharply heightened counterparty credit risk concerns among financial institutions), the spread on these swaps skyrocketed all the way to 41 bps at the peak of the 2008 crisis. VaR models did not capture this spike in risk.

The Model Monitoring Idea

VaR is actually a model within a model. The first model simulates market scenarios; the second model prices the underlying portfolio for each simulated scenario.

In the first model, simplifications are made when simulating the future behavior of market variables. In the second model, simplifications are made in the valuation exercise to get results faster. For example, in an interest rate risk calculation, the change in the value of a bond is approximated using a duration-convexity approach; for an options portfolio, on the other hand, Greeks are used in place of full repricing.
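
For concreteness, here is a minimal sketch in Python of the duration-convexity approximation just mentioned; all numerical inputs are hypothetical illustration values, not figures from any production model.

```python
# A minimal sketch of the duration-convexity approximation (hypothetical inputs).

def bond_price_change(price, mod_duration, convexity, dy):
    """Approximate change in a bond's price for a small yield shift dy (decimal).

    The first-order term uses modified duration; the second-order term adds
    convexity. The approximation is only valid when dy is small.
    """
    return price * (-mod_duration * dy + 0.5 * convexity * dy ** 2)

# Example: a bond priced at 100, duration 7, convexity 60, yield up 25 bps.
print(bond_price_change(100.0, 7.0, 60.0, 0.0025))  # about -1.73
```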

Mathematically, these approximations are valid only when the change in the risk factors is small, yet there is no clear quantification of how small is small enough. VaR models exploit these approximations to get results faster, which can give rise to uncaptured risks.

To capture some of the risks that a VaR model does not, we propose the development of a model monitoring framework that can run a deeper analysis of risk numbers. Building such a framework requires several steps. The first step is to analyze the data that a model must use.

Within a VaR model, every piece of data has its own distribution, which is modeled according to the underlying economics. For example, the basic rules of economics state that “stock prices cannot fall below zero” and that “interest rates mean revert.” In addition to weighing such rules heavily, VaR modeling also leans on other stylized behaviors in economics – such as the mean reversion observed in many foreign exchange instruments and commodities, the pull to par of bond prices as maturity approaches, and the interest rate parity that tends to hold among liquid currencies.
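
As an illustration of the mean-reversion assumption, interest rates are often simulated with a discretized Vasicek-style process. The sketch below uses hypothetical parameter values and is purely illustrative.

```python
import numpy as np

# A mean-reverting (Vasicek-style) rate simulation with hypothetical parameters.

rng = np.random.default_rng(42)

def simulate_mean_reverting(r0, kappa, theta, sigma, dt, n_steps):
    """Simulate a rate that is pulled back toward its long-run mean theta
    at speed kappa, with volatility sigma."""
    rates = [r0]
    for _ in range(n_steps):
        dr = kappa * (theta - rates[-1]) * dt \
             + sigma * np.sqrt(dt) * rng.standard_normal()
        rates.append(rates[-1] + dr)
    return np.array(rates)

# One year of daily steps, starting at 5% with a 3% long-run mean.
path = simulate_mean_reverting(r0=0.05, kappa=0.8, theta=0.03,
                               sigma=0.01, dt=1 / 252, n_steps=252)
print(path[-1])  # on average, ends closer to theta than to the starting rate
```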

Market variables and instrument values will not strictly follow these behaviors at the expected rate or time, especially during stress periods. An effective risk model should record the behavior of these variables and report deviations. Moreover, it should also use historical data to perform comparative analyses and to trigger alerts when breaches occur.
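
One simple way to implement such deviation reporting is a rolling z-score check on daily changes, as in the sketch below; the window length and threshold are hypothetical choices that a risk team would calibrate for itself.

```python
import numpy as np

# Flag days on which a market variable moves by more than k rolling
# standard deviations (hypothetical window and threshold).

def deviation_alerts(series, window=60, k=4.0):
    """Return the indices of days whose change breaches a rolling z-score band."""
    changes = np.diff(np.asarray(series, dtype=float))
    alerts = []
    for t in range(window, len(changes)):
        hist = changes[t - window:t]
        z = (changes[t] - hist.mean()) / (hist.std() + 1e-12)
        if abs(z) > k:
            alerts.append(t + 1)  # index into the original series
    return alerts
```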

Performing an attribution walk is the next step in the development of an effective model monitoring framework. This is a process that analyzes the change in VaR between two consecutive outputs (e.g., day-over-day or quarter-over-quarter VaR changes) by quantifying the impact of sequentially changing each variable.

Figure 2: An Attribution Walk


An attribution walk gives the business a tool for diving deep into model sensitivity for capital calculations, enabling the cross-checking of unexpected changes in the numbers.
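
A minimal sketch of the mechanics follows, assuming a hypothetical var_model function that maps a dictionary of risk-factor values to a VaR number; the factor names are placeholders.

```python
# Walk each factor, one at a time, from its old to its new value, recording
# the incremental VaR impact of each sequential change. Note that the
# decomposition depends on the order in which the factors are walked.

def attribution_walk(var_model, factors_old, factors_new):
    """Return a list of (factor, incremental VaR change) pairs."""
    current = dict(factors_old)
    previous_var = var_model(current)
    walk = []
    for name, new_value in factors_new.items():
        current[name] = new_value
        step_var = var_model(current)
        walk.append((name, step_var - previous_var))
        previous_var = step_var
    return walk  # the increments sum to the total change in VaR

# Usage (hypothetical factor names and values):
# attribution_walk(var_model,
#                  {"ir_1m": 0.0210, "fx_eurusd": 1.08},
#                  {"ir_1m": 0.0235, "fx_eurusd": 1.07})
```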

Stress testing and back testing are two additional steps that must take place to build a successful model monitoring framework.

Since every firm faces risk factors that are unique to its industry and business environment, VaR models must be portfolio specific. A variable might impact two firms differently, depending on, for example, portfolio composition, product line and customer profile. What’s more, each model is impacted differently by changes in variables.

Developing firm-specific stress testing scenarios can be achieved with the help of an attribution walk. The risk manager can analyze which variables to shock, and at what level. The trick is finding the right balance: too high a shock yields an unrealistically severe scenario, while too low a shock does not provide a robust stress test.
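
As a sketch of how shock levels might be calibrated, the snippet below reprices a portfolio under a ladder of shocks to a single factor; portfolio_loss is a hypothetical repricing function and the shock grid is an illustrative choice.

```python
# Reprice the portfolio under a ladder of shocks to one factor, so the risk
# manager can judge which level is severe yet still plausible.
# `portfolio_loss` is a hypothetical callable: shocked factor value -> loss.

def stress_ladder(portfolio_loss, factor_value, shocks_bps=(25, 50, 100, 200)):
    """Return {shock in bps: portfolio loss under that shock}."""
    return {bps: portfolio_loss(factor_value + bps / 10_000.0)
            for bps in shocks_bps}
```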

Back tests enable a firm to construct an observable pattern of model behavior vis-à-vis market conditions. Typically, back testing is performed over some fixed number of days, such as the last 100 or 200 or 500 days. However, to construct a truly observable pattern of model behavior vis-à-vis market conditions, a firm really needs to perform back tests over all of these windows – the last 100 and 200 and 500 days – and compare the results.
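
A minimal sketch of counting VaR exceptions over several windows at once follows, assuming hypothetical aligned arrays of VaR forecasts and realized P&L.

```python
import numpy as np

# var_forecasts[t] is the one-day 99% VaR predicted for day t (a positive
# loss number); pnl[t] is the realized profit and loss on day t.
# Both arrays are hypothetical inputs supplied by the firm's risk systems.

def exception_counts(var_forecasts, pnl, windows=(100, 200, 500)):
    """Count VaR breaches (losses exceeding the forecast) per lookback window.

    At the 99% level, roughly one breach per 100 days is expected; a count
    far above that in any window is a signal worth investigating.
    """
    breaches = np.asarray(pnl) < -np.asarray(var_forecasts)
    return {w: int(breaches[-w:].sum()) for w in windows if w <= len(breaches)}
```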

Figure 3: A Model Monitoring Framework to Capture the Risks Not Found in VaR


Through the aforementioned steps, we can generate reports about data, model sensitivity, stress testing and back testing. By analyzing historical data and by finding patterns, these reports should provide a complete picture of a model’s limitations, including the risks it cannot capture.

Analyzing historical data to find patterns is a data-intensive exercise that is beyond the capacity of any team of risk analysts working manually. In this type of analysis, heuristic techniques can help uncover patterns in the generated reports.

Sequentially, these techniques should take into account the market data, the financial instrument and the portfolio. Once these variables are considered, other factors (like historical back-testing risk reports) should be analyzed to find patterns or any specific data behavior.

When attempting to develop an effective model monitoring framework that can analyze past data and risk reports, other methodologies – such as a Bayesian approach and/or neural networks – are also applicable.

Neural networks are efficient at recognizing patterns, but their internal configuration is unintuitive. A Bayesian approach is intuitive, but its performance depends on complex analytical modeling. Both methodologies require heavy computation, but, given the advances in parallel computing, this should not be a roadblock.
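
To illustrate the Bayesian approach in its simplest form, the sketch below applies a Beta-Binomial update to a model's breach rate using back-testing results; the prior parameters and the observed counts are hypothetical.

```python
# Beta-Binomial update of the model's true breach rate (hypothetical numbers).

alpha, beta = 1.0, 99.0   # prior centered on the 1% rate a 99% VaR implies
breaches, days = 5, 250   # hypothetical back-test observation

# Conjugate update: each breach increments alpha, each clean day increments beta.
alpha_post = alpha + breaches
beta_post = beta + (days - breaches)
posterior_mean = alpha_post / (alpha_post + beta_post)
print(f"posterior breach rate: {posterior_mean:.3%}")  # about 1.7%, above target
```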

Parting Thoughts

Data is a natural phenomenon, but models are man-made simplifications of that phenomenon. Consequently, models cannot capture all available data, and cannot fully predict future behavior. What’s more, this problem worsens in a limited and unreliable data environment.

VaR complicates matters further because it is actually a model within a model. Truthfully, risk modeling would not be manageable or sustainable without certain simplifications of the real world, such as those found in VaR. However, while the risks that VaR fails to capture (due to oversimplification) remain dormant in stable times, they can arise suddenly and pose a serious danger when unexpected events occur.

In this article, we have presented a model monitoring framework that offers directions for diving deep into all aspects of the risk measurement process. The exercises we have suggested will keep risk managers informed about model output.

The strength of our framework is that we use structured data to generate the output of each activity. Our analysis is based on heuristic techniques that can be easily implemented to extract knowledge from data.
