Validation plan of FX Vanilla Option Pricing models

Some time back I validated an FX vanilla option pricing model. I would like to present some suggestions on its validation.

The validation plan of an FX vanilla option pricing model can be divided into the following parts:

  1. Data validation
  2. Conceptual soundness
  3. Outcomes analysis
  4. Ongoing monitoring

Data validation will involve:

  1. Check of data consistency between the data sources, the risk systems and the front office pricing systems.
  2. Checking whether the right interest rate curves and volatility surfaces are used. It was once observed that, because of a lack of volatility surface information, the model owners used the surfaces of an alternate currency. If such a decision has been made, the rationale should be documented.
  3. Check of FX spot rates and forward rates.

Here we will have to choose a set of currency pairs on which the validation should primarily focus. These choices should include:

  1. Currency pairs which are adequately liquid
  2. Currency pairs that form a major part of the portfolio
  3. Currency pairs containing currencies whose exchange rates are available only through indirect quotes
  4. At least one currency pair that is not adequately liquid, so that the impact of illiquidity may be studied

Conceptual Soundness can be checked by the following steps:

  1. The first step should be the external replication of all the following:
    1. Pricing of the options
    2. Calculation of the sensitivities
    3. Calculation of the volatility surfaces
    4. Construction of the interest rate curves and interpolation methodologies
    5. If the risk system has to be validated then the VaR calculations must be replicated.
  2. The external replication should be tested against all reasonable borderline cases of the variables and their possible combinations.
  3. The next part should be exploring the possibilities of benchmarking the currently used methodologies for pricing, interpolation and volatility surface construction against
    1. External sources: the usage of those sources should be justified by the validator.
    2. Alternative methodologies: again, their usage should be justified by the validator.
    3. The results should be used either to challenge or to support the model output.
  4. If the Garman–Kohlhagen pricing formula is used, then its data assumptions should be checked against historical data (see the sketch after this list):
    1. The formula assumes that spot rates are lognormally distributed; this should be checked by studying the historical data.
    2. The consistency (and any inconsistencies) of covered interest rate parity should be checked using historical data.
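
As a starting point for such an external replication, here is a minimal sketch of a Garman–Kohlhagen pricer. The function and parameter names are my own choices, and it assumes continuously compounded domestic and foreign rates and a flat per-expiry volatility read off the validated surface.

```python
# Minimal sketch of an independent Garman-Kohlhagen replication (assumption:
# continuously compounded domestic/foreign rates, flat per-expiry volatility).
# Names and example numbers are illustrative, not taken from any system.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf  # standard normal CDF

def garman_kohlhagen(spot, strike, vol, t, r_dom, r_for, call=True):
    """Price of an FX vanilla option in domestic currency per unit of foreign notional."""
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    if call:
        return spot * exp(-r_for * t) * N(d1) - strike * exp(-r_dom * t) * N(d2)
    return strike * exp(-r_dom * t) * N(-d2) - spot * exp(-r_for * t) * N(-d1)

# Example: EURUSD call, spot 1.10, strike 1.12, 20% vol, 6 months,
# 5% USD (domestic) rate, 3% EUR (foreign) rate.
print(garman_kohlhagen(1.10, 1.12, 0.20, 0.5, 0.05, 0.03, call=True))
```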

Outcomes Analysis: A report should be developed that discusses which currency pairs demonstrate consistency with the properties mentioned above.

Ongoing Monitoring

The ongoing monitoring report should document the periodic model performance compared with external data and realized trades. It should also document data deviations and their impact.

 

Dear reader, first of all thank you for visiting this page! In case you think the validation can be improved, please leave a suggestion; I will include it with acknowledgment.


FX Call Option = Put Option from Counterparty’s Perspective

An FX call option is an option to buy a foreign currency at a particular domestic currency price. Looked at from the counterparty's perspective, it is also a put option to sell the domestic currency at the corresponding foreign currency price.

Let us take the example of a USD/EUR currency option. Suppose the price of 1 unit of euro is X, the strike price is K, and c is the price of the call option in USD.

On the other side, the price of USD in euro is 1/X, and the strike to buy 1 unit of USD in euro is 1/K. Let the price of this put option be p_f.

The options c and p_f are similar in nature, so their prices should be equivalent. But if you calculate them directly, they won't be identical: you need to make exchange rate adjustments and notional assumptions.

p_f will have to be multiplied by X to convert the option price into the domestic currency.

The resultant value will have to be further multiplied by K because the notional amount in the call is in Euro.

The resultant price will be equal to c.

Summarizing: c = p_f × X × K
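
A quick numerical check of this symmetry, using a compact Garman–Kohlhagen pricer with illustrative market data; note that when the quote is flipped, the domestic and foreign rates swap roles.

```python
# Numerical check of the symmetry c = p_f * X * K (a sketch; illustrative numbers).
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def gk(spot, strike, vol, t, r_dom, r_for, call=True):
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    sign = 1 if call else -1
    return sign * (spot * exp(-r_for * t) * N(sign * d1)
                   - strike * exp(-r_dom * t) * N(sign * d2))

X, K, vol, t = 1.10, 1.12, 0.20, 0.5      # EUR priced in USD
r_usd, r_eur = 0.05, 0.03

c   = gk(X, K, vol, t, r_dom=r_usd, r_for=r_eur, call=True)        # USD call on EUR
p_f = gk(1/X, 1/K, vol, t, r_dom=r_eur, r_for=r_usd, call=False)   # EUR put on USD

print(c, p_f * X * K)   # the two prices should agree
```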

What senior investment leaders expect from their risk analysts

Senior investment managers and leaders have historically had a very good understanding of the market. They are the ones who understand the data intuitively.

With time the world has become more integrated and businesses have diversified. This has resulted in diversified data. Even though the markets are getting complicated like never before, senior investment professionals are still the very best at understanding them, and they still understand their data. The challenge they face is deriving knowledge from that data: an individual market data point can be easily understood, but when there are many data points it becomes overwhelming even for a seasoned professional to derive knowledge from them.

From my experience in various projects with such professionals, I came to know that when they come to risk analysts with data and a problem, they are generally not looking for a sophisticated statistical model. They are looking for a user interface toolkit which simplifies their data mining process and hence lets them extract knowledge from the data easily.

I would like to share some of my personal project experience.

Tool box for a hedge fund manager

A hedge fund manager had a portfolio of revolver loans which he had to sell to a client. In order to make the sale, the manager had to show the client the possible worst loss the portfolio could produce.

The data he had was the last 10 years of history for each of his loan customers and the outstanding loan each customer had. Customers who breached the limit of 80% withdrawal of their allocated amount were considered risky. He came to our team with a request to develop a tool which could mine knowledge from his data by performing historical simulation on random subsamples. We offered to perform additional statistical analysis, but he was not interested.

So we developed a tool which could perform simulations with multiple sample sizes of his choice. He wanted to demonstrate to his client, across all the customers:

  1. In a given year, what percentage of the customers were in risky territory?
  2. Which sectors were the most risky?
  3. How much dollar value of the portfolio in each segment was at risk?
  4. What was the average outstanding-to-loan ratio for a random sample?
  5. What was the standard deviation of the outstanding-to-loan ratio for the chosen sample?

Our tool gave him the freedom to choose multiple samples at a time and his own choice of time period. All the visualizations were in graphical and simple tabular formats. Empowered with his own data, he was able to make the sale!
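
For illustration, a minimal sketch of the random-subsample historical simulation behind such a tool. The column names, the 80% threshold and the synthetic data are assumptions, not the client's actual data.

```python
# Minimal sketch of a random-subsample historical simulation of loan utilisation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic stand-in for the manager's data: one row per customer per year.
data = pd.DataFrame({
    "customer": np.repeat(np.arange(200), 10),
    "year": np.tile(np.arange(2006, 2016), 200),
    "sector": np.repeat(rng.choice(["Energy", "Retail", "Tech"], 200), 10),
    "limit": np.repeat(rng.uniform(1e6, 5e6, 200), 10),
})
data["outstanding"] = data["limit"] * rng.uniform(0, 1, len(data))

def simulate(data, sample_size, year, n_draws=1000, risky_threshold=0.80):
    """Draw random customer subsamples for one year and summarise utilisation."""
    snapshot = data[data["year"] == year]
    customers = snapshot["customer"].unique()
    pct_risky, mean_util, std_util = [], [], []
    for _ in range(n_draws):
        picked = rng.choice(customers, size=sample_size, replace=False)
        sub = snapshot[snapshot["customer"].isin(picked)]
        util = sub["outstanding"] / sub["limit"]
        pct_risky.append((util > risky_threshold).mean())
        mean_util.append(util.mean())
        std_util.append(util.std())
    return pd.DataFrame({"pct_risky": pct_risky,
                         "mean_utilisation": mean_util,
                         "std_utilisation": std_util})

print(simulate(data, sample_size=50, year=2015).describe())
```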

Expected Loss calculator for a Loan Portfolio Manager

A senior loan portfolio manager managed a multi-billion dollar loan portfolio with exposure in various segments.

He identified a set of risk factors which impacted the exposure of his portfolio and requested us to develop a toolbox with which he could measure the change in exposure.

We developed an Excel-based tool where he could vary the risk factors and check the change in portfolio valuation sector-wise.
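
A tiny sketch of the idea behind that tool: apply user-chosen risk factor shocks to assumed sector-level sensitivities and report the sector-wise change in valuation. All numbers and names here are made up.

```python
# Toy sketch: sector-wise revaluation under user-chosen risk factor shocks.
import pandas as pd

# Assumed linear sensitivities (change in value per 1bp move) by sector.
sensitivities = pd.DataFrame(
    {"rates_per_bp": [-0.8e6, -0.3e6, -0.5e6],
     "spreads_per_bp": [-1.2e6, -0.6e6, -0.9e6]},
    index=["Energy", "Retail", "Tech"])

shocks = {"rates_per_bp": 25, "spreads_per_bp": 40}   # user-chosen scenario, in bp

change_by_sector = (sensitivities * pd.Series(shocks)).sum(axis=1)
print(change_by_sector)         # change in valuation per sector
print(change_by_sector.sum())   # portfolio-level change
```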

Principal component analysis for an Interest Rate Strategist

Principal Component Analysis (PCA) is perhaps the most effective tool for dealing with highly correlated data, and interest rate curve data is one of the best examples of such data. There is high autocorrelation within a historical series as well as very high collinearity across the tenors. Hence PCA is a favorite statistical tool of interest rate strategists.

Many years ago, even though Excel was the favorite tool of business professionals, Excel-based tools were not easily found on the internet. An interest rate strategist came to us with a request to develop an Excel-based tool which calculated a PCA for the historical series of any currency of his choice.

We offered to include additional sophisticated statistical models to help him in his analysis, but he was only interested in the PCA. After further discussion, we came to know that he relied on his own judgment about the data and required a tool which could do the PCA and, in addition, give very good visualization of the results.
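
A small sketch of the PCA underlying such a tool, run on synthetic yield curve changes; in the original tool the input was the historical series of the user's chosen currency.

```python
# Sketch of a PCA on yield-curve history (synthetic daily rate changes, in bp).
import numpy as np

rng = np.random.default_rng(1)
tenors = np.array([0.25, 1, 2, 5, 10, 30])

# Synthetic correlated daily changes across tenors: level + slope + noise.
level = rng.normal(0, 3, 1000)[:, None] * np.ones(len(tenors))
slope = rng.normal(0, 1, 1000)[:, None] * np.linspace(-1, 1, len(tenors))
noise = rng.normal(0, 0.3, (1000, len(tenors)))
changes = level + slope + noise

# PCA via eigen-decomposition of the covariance matrix.
cov = np.cov(changes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = eigvals.argsort()[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("Variance explained by first three components:", explained[:3].round(3))
print("First component (typically the level factor):", eigvecs[:, 0].round(2))
```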

Conclusion

Often investment managers and leaders reach out to risk analysts with business problems associated with data mining. These people are in the market and have developed their expertise in understanding it. The problems they come with are generally not within the regulatory domain but are about data mining.

When the business problems are not within the regulatory domain, analysts should not push additional statistical analysis beyond what is required; instead they should focus on empowering their clients with better visualization of the requested analysis. They should develop the requested tools in a way that gives the clients more freedom to modify the analysis and view it from various perspectives.

P&L attribution challenge in FRTB compliance

FRTB’s P&L attribution test requirements are based on two metrics:

  1. The mean unexplained daily P&L (i.e. risk-theoretical P&L minus hypothetical P&L) over the standard deviation of the hypothetical daily P&L:

Ratio1 = [ (1/N) × Σ_t (P&L_Risk,t − P&L_Hypo,t) ] / σ(P&L_Hypo)

where N is the number of trading days in the month. Ratio1 has to be between -10% and +10%.

  2. The ratio of the variances of the unexplained daily P&L and the hypothetical daily P&L:

Ratio2 = Var(P&L_Risk − P&L_Hypo) / Var(P&L_Hypo)

Ratio2 has to be less than 20%.

These ratios are calculated monthly and reported before the end of the following month. If the first ratio is outside the range of -10% to +10%, or if the second ratio is in excess of 20%, then the desk experiences a breach. If the desk experiences four or more breaches within the prior 12 months, it must be capitalized under the standardized approach. The desk must remain on the standardized approach until it passes the monthly P&L attribution requirement and has satisfied its backtesting exceptions requirement over the prior 12 months.
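
A minimal sketch of the two metrics and the breach check on one month of desk-level data, using synthetic P&L series and the thresholds quoted above.

```python
# Sketch of the two P&L attribution metrics for one month of desk-level data.
import numpy as np

rng = np.random.default_rng(2)
n_days = 21                                    # trading days in the month
hypo = rng.normal(0, 1_000_000, n_days)        # hypothetical daily P&L
risk = hypo + rng.normal(0, 300_000, n_days)   # risk-theoretical daily P&L
unexplained = risk - hypo

ratio1 = unexplained.mean() / hypo.std(ddof=1)
ratio2 = unexplained.var(ddof=1) / hypo.var(ddof=1)

breach = not (-0.10 <= ratio1 <= 0.10) or ratio2 > 0.20
print(f"ratio1 = {ratio1:+.1%}, ratio2 = {ratio2:.1%}, breach = {breach}")
```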

The real challenge:

Let’ look at variance(P&LRisk – P&LHypo)

Volatility of P&LRisk  = σ(P&LRisk)

Volatility of P&LHypo  = σ(P&LHypo)

Then variance(P&LRisk – P&LHypo) = σ2(P&LHypo) + σ2(P&LRisk) – 2×ρ × σ(P&LRisk) × σ(P&LHypo)

For simplicity let us assume that σ(P&LRisk) = σ(P&LHypo). Then

ratio3

As per FRTB guidelines,  (2-2 ×ρ) < 20%

This implies ρ>90%

The following article explains these challenges from the practical perspective:

FRTB Compliance – Implementation Challenges http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

 

Risk management systems and front office systems

Often validators have to validate derivative pricing models. Banks generally have two systems: one for risk management and the other for the front office. It is advisable that both systems be exactly similar, but in practice this is not common. Because of the impact front office systems have on trading, they are more sophisticated, and that makes them expensive. For that reason banks often use cheaper or home-grown systems for their risk management requirements.

The front office engine's pricing calculations tend to be more accurate because they impact the business's PnL directly. Pricing models in risk systems are simplified because in risk calculation exercises like VaR/PFE a derivative has to be priced many times, so there are time constraints.

Differences are observed between the derivative pricing models in the front office and risk management systems from both an input data perspective and a model perspective.

Input data perspective:

  1. Front office systems use sophisticated techniques for interpolation of the yield curve, whereas risk management systems often get away with simple linear interpolation (a small comparison is sketched after this list).
  2. Option-based products use volatility smiles (simple equity options), volatility surfaces (FX options, caps/floors) or volatility cubes (swaptions).
    1. The front office data generally has more data points in its smiles, surfaces or cubes than the risk management systems.
    2. Interpolation follows the same rule as discussed above.
  3. Risk models require historical data to calculate VaR/PFE. Often, for illiquid currencies as well as for exotic derivatives, the historical data is not adequate; in these scenarios alternate data is used. For example, if the volatility of a particular currency is not available, then the volatility of an alternative currency is used as a proxy.
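
As referenced in point 1 above, here is a small sketch comparing linear and cubic-spline interpolation of a zero curve; the pillar tenors and rates are made-up numbers.

```python
# Small comparison of linear versus cubic-spline interpolation of a zero curve.
import numpy as np
from scipy.interpolate import CubicSpline

pillars = np.array([0.25, 1, 2, 5, 10, 30])                     # tenors in years
zeros = np.array([0.030, 0.032, 0.034, 0.037, 0.040, 0.042])    # zero rates

t = 7.0  # an off-pillar tenor
linear = np.interp(t, pillars, zeros)
spline = CubicSpline(pillars, zeros)(t)

print(f"7y zero rate: linear {linear:.4%}, cubic spline {float(spline):.4%}")
```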

The challenge multiplies when there are multiple currencies in the portfolio. The issue of data triangulation multiplies because in risk management the same simplifications are often applied to each currency, which may not be valid for every currency. Often risk managers prioritize their data based on their portfolio concentration.

Model perspective:

For exotic derivatives, front office pricing models use sophisticated techniques like Monte Carlo simulation, whereas risk models use approximate techniques. These approximate techniques are often analytical approximations of derivative pricers where an analytical solution is not tractable. For example, to price an American put option the front office may use Monte Carlo simulation, but it is generally observed that the risk models use approximate analytical formulas.

Due to such reasons there are often differences in pricing between front office and risk systems. Validators often use one system to benchmark the other and hence use them as leverage in the validation exercise. Even though the prices may not match, the sensitivities should.
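
A minimal sketch of the kind of cross-check this implies: compute a bump-and-revalue delta from each system's pricer and compare. Here a single Garman–Kohlhagen formula stands in for both pricers, purely for illustration; in practice each system's own pricing call would be plugged in.

```python
# Sketch of a bump-and-revalue delta cross-check between two pricers.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def gk_call(spot, strike, vol, t, r_dom, r_for):
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * exp(-r_for * t) * N(d1) - strike * exp(-r_dom * t) * N(d2)

def delta_bump_revalue(pricer, spot, bump=1e-4, **params):
    """Central-difference delta: reprice with the spot bumped up and down."""
    return (pricer(spot + bump, **params) - pricer(spot - bump, **params)) / (2 * bump)

params = dict(strike=1.12, vol=0.20, t=0.5, r_dom=0.05, r_for=0.03)
delta_front_office = delta_bump_revalue(gk_call, 1.10, **params)
delta_risk_system = delta_bump_revalue(gk_call, 1.10, **params)  # would call the risk system's pricer

print(delta_front_office, delta_risk_system, abs(delta_front_office - delta_risk_system))
```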

Need for FRTB

I attended an FRTB conference last Thursday and Friday.

There a speaker gave a very nice and interesting overview of why FRTB came about in the first place. Listening to him was a treat to the ears!

I was able to capture some notes from his presentation.

During the pre-crisis era

  1. Rules for the boundary between the trading book and the banking book were imprecise, and regulators could not stop the regulatory arbitrage that was happening in the banking sector by transferring assets between the banking book and the trading book and vice versa.
  2. There was no desk-wise visibility for the regulators. Regulators had the right to look into the details within the enterprise, but because there were no set procedures around it, they seldom did.
    1. Because of this, regulators were not able to question the desks which were carrying out complex trades in seniority tranches.
  3. There was no regulatory tool which they could use to stop the usage of a model. Regulators could question model usage, but had to deal with each model individually. In the process, many risks remained uncovered.
  4. There was no linkage between the standardized approach and the model based approach. There was no logical or intuitive relationship between the capitals calculated under these two approaches.
  5. The degree of complexity was not penalized in the way regulators deemed fit; rating migration and default risks were not captured.
  6. With the VaR approach, regulators and risk managers were looking at the point where the tail began rather than at what was within the tail.
  7. The 2008 crisis proved that the way liquidity was measured was inadequate.

After the crisis

Regulators came up with Basel 2.5/3, which was a quick patch-up activity to address some of the above challenges:

  1. Rating migration risk was captured by including CVA in capital calculations.
  2. Liquidity was captured by introducing the multiplication factor of 3.
  3. Tail risk was captured by calculating VaR in stressed market conditions.
  4. A complex calculation was introduced to capture the incremental risk charge.
  5. The comprehensive risk measure was applied to correlation trading books.

The FRTB rules were decided over the course of almost 5 years. They address all the above issues and also include:

  1. The need to address data quality issues, by introducing non-modellable risk factors.
  2. Comparability of the Standardized Approach and the Internal Model Approach, by making the Standardized Approach risk sensitive.
  3. Mandatory PnL attribution, and hence triangulation between front office data and risk management data.

 

I will update this page as I learn and understand FRTB more.

Model Risk Management under FRTB regime

In previous regulatory guidelines the standardized approach was usually not meant to be a model. There was no linkage between the standardized approach and the internal model based approach.

Regulators were able to compare capital under the standardized approach between two banks because the rules were the same for all banks, but for risk managers there was no logical or intuitive relationship between the capital calculated under the standardized and the internal model based approaches for a given portfolio.

Making the standardized approach risk sensitive is a path-breaking change made by the regulators.

In the FRTB standardized approach the risk sensitivities come from the derivative pricing models, and these pricing models have uncertain outcomes. Capital has to be calculated at the desk level, so each desk's model has to go through the normal model risk management process framework, where it will be checked for conceptual soundness, outcomes analysis and ongoing monitoring.

Currently (Basel 3 and before), when capital was allocated for specific risk charges, risk managers generally went to lookup tables and applied some formulas. This is no longer valid in FRTB. FRTB has introduced the concept of non-modellable risk factors: any risk factor which has fewer than 24 data points will attract a Non-Modellable Risk Charge (NMRC).

FRTB also acknowledges that in various situations (especially exotic option pricing) the prescribed delta, gamma and vega approach does not work well. Modelers, and subsequently validators, need to be cognizant of this issue and ensure that the model development framework addresses it in those situations.

Under FRTB, portfolio-level capital has to be computed under three correlation scenarios. At the desk level the scenario with the maximum capital value is to be chosen. The calculation approach used to derive the capital will come under the purview of model risk.
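
For illustration, a highly simplified sketch of the "three correlation scenarios, take the maximum" aggregation; the weighted sensitivities and base correlations below are made-up numbers, not the prescribed FRTB parameters.

```python
# Highly simplified sketch of aggregation under three correlation scenarios.
import numpy as np

ws = np.array([1.2e6, -0.8e6, 0.5e6])          # weighted sensitivities (illustrative)
base_corr = np.array([[1.0, 0.5, 0.3],
                      [0.5, 1.0, 0.4],
                      [0.3, 0.4, 1.0]])

def scale_offdiag(corr, factor):
    """Scale off-diagonal correlations, keeping the diagonal at 1 and capping at +/-1."""
    scaled = corr * factor
    np.fill_diagonal(scaled, 1.0)
    return np.clip(scaled, -1.0, 1.0)

def aggregate(ws, corr):
    """sqrt(ws' * corr * ws), floored at zero."""
    return float(np.sqrt(max(ws @ corr @ ws, 0.0)))

scenarios = {"high": scale_offdiag(base_corr, 1.25),
             "medium": base_corr,
             "low": scale_offdiag(base_corr, 0.75)}
capitals = {name: aggregate(ws, corr) for name, corr in scenarios.items()}

print(capitals)
print("Capital = max over scenarios =", max(capitals.values()))
```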

The default risk charge uses various inputs which are not always straightforward and deterministic. The choices made by the first line of defense will require validation by the second line of defense.

In the internal model based approach, Expected Shortfall is calculated. The model is stress calibrated, that is, calibrated on stressed scenarios. When this calculation is performed, not all the risk factors are used for calibration: the first line of defense has to decide which factors they are using and which they are not, and then justify their choice. The second line of defense has to review and critique those choices.

The same approach has to be used for the add-ons due to non-modellable risk factors: based on the exceptions, the multipliers have to be decided. Similarly, for PnL attribution the first line of defense needs to decide the approach, and that approach will be reviewed by the second line of defense.
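
As a small illustration of the Expected Shortfall metric mentioned above, here is a minimal sketch of a 97.5% ES on a vector of P&L scenarios; the data is synthetic, and the stress calibration and risk factor selection discussed above are out of scope.

```python
# Minimal sketch of a 97.5% Expected Shortfall on a vector of P&L scenarios.
import numpy as np

def expected_shortfall(pnl, alpha=0.975):
    losses = -np.asarray(pnl, dtype=float)       # convert P&L to losses
    var = np.quantile(losses, alpha)             # Value-at-Risk at the alpha level
    return losses[losses >= var].mean()          # average loss beyond VaR

rng = np.random.default_rng(3)
pnl = rng.normal(0, 1_000_000, 250)              # stand-in for stressed P&L scenarios
print(f"97.5% ES: {expected_shortfall(pnl):,.0f}")
```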

Huge opportunities in Ongoing Monitoring

By insisting on capital allocation at the desk level, regulators want banks to keep detailed tabs on each and every trading activity. Even if a desk qualifies for the internal model based approach, it is mandatory to also calculate capital with the standardized approach.

This will result in a massive amount of data being generated which will need to be monitored. Banks will require sophisticated model monitoring frameworks which keep an eye on outliers and do pattern analysis of each data sequence generated, so that timely warning signals are raised.

To handle this amount of data, banks will require a sophisticated technological framework which uses advanced statistical tools to perform pattern analysis of the risk data generated.

The triggers will not only be pattern dependent but also a combination of risk measures. Let's understand this with an example. Suppose a time sequence of (gamma OR vega) is showing some outliers. The monitoring system should raise a trigger, but it may not be critical. But suppose the pattern is observed in the (gamma AND vega) sequence; then it may really be a case where risk managers should be cautioned. This will give rise to research in a new field of financial risk management.
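
A toy sketch of the "gamma AND vega" trigger idea: flag a date as a warning when either series is an outlier, and as critical only when both are. The z-score threshold is an assumption.

```python
# Toy sketch of combined-risk-measure outlier triggers.
import numpy as np

def zscores(x):
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std(ddof=1)

def triggers(gamma_series, vega_series, threshold=3.0):
    g_out = np.abs(zscores(gamma_series)) > threshold
    v_out = np.abs(zscores(vega_series)) > threshold
    return {
        "warning": np.where(g_out | v_out)[0],    # either series is an outlier
        "critical": np.where(g_out & v_out)[0],   # both series are outliers
    }

rng = np.random.default_rng(4)
gamma = rng.normal(0, 1, 250); gamma[100] += 8
vega = rng.normal(0, 1, 250); vega[100] += 8; vega[200] += 8
print(triggers(gamma, vega))
```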

Banks may choose to develop such frameworks in-house or may choose to employ vendor models.

From the vendor perspective there will be huge competition, with each vendor claiming that his model monitoring framework is better because of the better and more advanced techniques used in it. Hence there will be further issues around disclosure of methodologies and techniques by the vendors: no vendor would like to disclose the methodology of his model monitoring framework for fear of being copied by a competitor.

Concluding thoughts

From the market risk management framework it appears that regulators have made a giant leap in understanding and managing risk, but because of the data intensiveness of the suggested approach, a new stream of work on ongoing monitoring will start.

 

There is also a publication: Model Risk Management under the FRTB Regime

Link: http://www.garp.org/#!/risk-intelligence/detail/a1Z40000003LViKEAW/model-risk-management-under-frtb-regime

 

I will update this page as I learn and understand FRTB more.