Interest Rate Sampling Algorithm

I developed the algorithm below in 2007, when computing power was limited. I doubt it is still relevant today, given the computing power now available, and I am also not sure it would pass model validation scrutiny.

Business problem:

The interest rate risk department wanted to calculate the interest rate risk, one year down the line, associated with the surplus (assets minus liabilities) of various product lines.

Our previous methodology

1) We used an interest rate scenario generator to produce 300,000 yield curve scenarios one year down the line, with the present-day yield curve as the input.

2) We then used the duration-convexity methodology to calculate the change in the market value of the surplus.

3) This gave us 300,000 changes in value, from which we calculated VaR at the 95, 99, 99.9 and 99.99 percentiles.

Problems

Evaluating 300,000 scenarios is a big bottleneck when carrying out rigorous scenario analysis, so a sampling methodology was needed.

I first experimented with the approach in the following research paper:

 http://www.cwu.edu/~chueh/naaj0207_8.pdf

But I found that the time complexity of its pivoting algorithm was a big bottleneck, so I designed my own pivoting algorithm, described below.

1) First, S1 scenarios are chosen randomly from the collection of S2 scenarios.

2) A distance matrix of size S1 x S1 is calculated, representing the distance of each scenario from every other scenario.

3) For each scenario the farthest distance is taken, giving S1 farthest distances.

4) Among these S1 farthest distances the least one is chosen, together with its scenario. Let this distance be D. This scenario gives an approximate idea of the centre.

5) Starting with this scenario as the first pivot, find another scenario which is at least D*alpha away from the first pivot and call it the second pivot. Here alpha is any number less than one; the greater the value of alpha, the smaller the number of resulting pivots.

6) Next, find a third scenario which is at least D*alpha away from the first two pivots and call it the third pivot.

7) In general, the Nth pivot is a scenario which is at least D*alpha away from the preceding N-1 pivots.

8) We carry out this process until the end of the sample. Depending on the value of alpha, we end up with a certain number of pivots.

9) After the pivots are formed, we map all S2 scenarios onto them and assign probabilities according to the number of scenarios mapped to each pivot.

For experimentation purposes I used S2 = 5000 and S1 = 2000, but one can play around with the values of S1 and alpha.
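The following is a minimal sketch of this pivoting and mapping procedure in Python, under the assumption that each scenario is a vector of yield-curve points and that the distance measure is plain Euclidean (the original implementation may have used a different representation and metric; the arrays below are placeholders):

```python
import numpy as np

def select_pivots(sample, alpha=0.9):
    """Pivot selection on an (S1, n_tenors) array of sampled scenario curves."""
    # S1 x S1 Euclidean distance matrix.
    sq = (sample ** 2).sum(axis=1)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2.0 * sample @ sample.T, 0.0))

    # Farthest distance for each scenario; the scenario whose farthest distance
    # is smallest is an approximate centre, and that distance is D.
    farthest = dist.max(axis=1)
    first = int(farthest.argmin())
    d = farthest[first]

    # A scenario becomes a new pivot only if it is at least D*alpha away
    # from every pivot found so far.
    pivot_idx = [first]
    for i in range(len(sample)):
        if all(dist[i, p] >= d * alpha for p in pivot_idx):
            pivot_idx.append(i)
    return sample[pivot_idx]

def map_and_weight(scenarios, pivots):
    """Map every scenario to its nearest pivot; probabilities come from the counts."""
    d = np.linalg.norm(scenarios[:, None, :] - pivots[None, :, :], axis=-1)
    nearest = d.argmin(axis=1)
    probs = np.bincount(nearest, minlength=len(pivots)) / len(scenarios)
    return nearest, probs

# Illustrative usage with the sizes mentioned above: S2 = 5000, S1 = 2000.
rng = np.random.default_rng(0)
all_scenarios = rng.normal(size=(5000, 10))     # placeholder yield-curve vectors
sample = all_scenarios[rng.choice(5000, size=2000, replace=False)]
pivots = select_pivots(sample, alpha=0.9)
nearest, probs = map_and_weight(all_scenarios, pivots)
```

For the comparison described below, the VaR percentiles are recomputed from the surplus changes at the pivots, weighted by probs, and compared with the percentiles computed from all S2 scenarios.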

 How I compared the results

I calculated the VaRs associated with all S2 scenarios and compared them with the VaRs associated with the sampled scenarios.

The algorithm is excellent in the tail, for the following reason:

How: the error in VaR 99.99 is smaller than the error in VaR 95.

Why: while forming the pivots we do not prune the universe; the pivots are drawn from all the given scenarios. We also do not remove pivots that have very few scenarios mapped to them.

Hence we include all the extreme cases.

This also explains why there is more error at the 95th percentile than at the 99.99th percentile: around the 95th percentile the scenarios are more dispersed, so they may be mapped to pivots that are not very similar to them.

But because the extreme scenarios are unique, fewer in number and, above all, never pruned, the error in the tail is minimal.


Validation plan of FX Vanilla Option Pricing models

A long time back I validated an FX vanilla option pricing model. I would like to present some suggestions on its validation.

The validation plan of an FX vanilla option pricing model can be divided into the following parts:

  1. Data validation
  2. Conceptual soundness
  3. Outcomes analysis
  4. Ongoing monitoring

Data validation will involve:

  1. Checking data consistency between the source data, risk systems and front office pricing systems.
  2. Checking whether the right interest rate curves and volatility surfaces are used. It was once observed that, for lack of volatility surface information, the model owners used the surface of an alternate currency; if such a decision has been made, the rationale should be documented.
  3. Check of FX spot rates and forward rates.

Here we will have to choose a set of currency pairs on which the validation should primarily focus. These choices should include:

  1. Currency pairs which are adequately liquid
  2. Currency pairs that form a major part of the portfolio
  3. Currency pairs whose exchange rates are available only through indirect quotes
  4. There should be at least one currency pair that is not adequately liquid, so that the illiquidity impact may be studied.

Conceptual Soundness can be checked by the following steps:

  1. The first step should be the external replication of all the following:
    1. Pricing of the options
    2. Calculation of the sensitivities
    3. Calculation of the volatility surfaces
    4. Construction of the interest rate curves and interpolation methodologies
    5. If the risk system has to be validated then the VaR calculations must be replicated.
  2. The external replication should be compared against the model at borderline (edge) cases of all the variables and their possible combinations.
  3. The next part should be exploring the possibility of benchmarking the currently used methodologies for pricing, interpolation and volatility surface construction against:
    1. External sources: the usage of those sources should be justified by the validator.
    2. Alternate methodologies: again, the choice should be justified by the validator.
    3. The results should be used either to challenge or to support the model output.
  4. If the Garman–Kohlhagen pricing formula is used, its assumptions should be checked against historical data (a pricing sketch follows this list).
    1. The formula assumes that spot rates are lognormally distributed; this should be checked by studying the historical data.
    2. The consistency (or inconsistencies) of interest rate parity should be checked using historical data.
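To give a flavour of what the external replication of the pricing could look like, here is a minimal Garman–Kohlhagen pricer in Python (a sketch only: the flat rates and volatility in the example are illustrative, and in practice the inputs would come from the validated curves and surfaces discussed above):

```python
from math import exp, log, sqrt
from statistics import NormalDist

def garman_kohlhagen(spot, strike, t, r_dom, r_for, vol, call=True):
    """Garman-Kohlhagen price of a European FX option, quoted in the domestic currency.

    spot and strike are in domestic currency per unit of foreign currency;
    r_dom and r_for are continuously compounded domestic and foreign rates.
    """
    n = NormalDist().cdf
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    if call:
        return spot * exp(-r_for * t) * n(d1) - strike * exp(-r_dom * t) * n(d2)
    return strike * exp(-r_dom * t) * n(-d2) - spot * exp(-r_for * t) * n(-d1)

# Illustrative EUR call / USD put: spot 1.10 USD per EUR, strike 1.05, 1 year to expiry.
price = garman_kohlhagen(1.10, 1.05, 1.0, r_dom=0.03, r_for=0.02, vol=0.10)
```

Sensitivities can be replicated in the same way, either by bumping the inputs or via closed-form Greeks.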

Outcomes Analysis: A report should be developed discussing which currency pairs demonstrate consistency in the properties mentioned above.

Ongoing Monitoring

The ongoing monitoring report should document periodic model performance compared with external data and realized trades. It should also document data deviations and their impact.

 

Dear reader, first of all thank you for visiting this page! If you think the validation can be improved, please leave a suggestion and I will include it with acknowledgment.

FX Call Option = Put Option from Counterparty’s Perspective

An FX call option is an option to buy a foreign currency at a particular domestic-currency price. Seen from the counterparty's perspective, it is also a put option to sell the domestic currency at the corresponding foreign-currency price.

Let us take the example of a USD/EUR currency option. Suppose the price of 1 unit of euro is X USD, the strike price is K, and c is the price of the call option in USD.

On the other side, the price of 1 USD in euros is 1/X and the strike to sell 1 USD is 1/K. Let the price of this put option, in euros, be p_f.

The options c and p_f represent the same contract and hence should carry the same value. But if you calculate them directly, the prices will not be identical: exchange rate and notional adjustments are required.

p_f has to be multiplied by X to convert the option price into the domestic currency.

The resulting value has to be further multiplied by K because the notional amount of the call is in euros.

The resultant price will be equal to c.

Summarizing: c = p_f * X * K
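A quick numerical check of this identity, reusing the garman_kohlhagen sketch from the validation section above (the numbers are purely illustrative):

```python
X, K, t, r_usd, r_eur, vol = 1.10, 1.05, 1.0, 0.03, 0.02, 0.10

# Call on 1 EUR with strike K, priced in USD (domestic = USD, foreign = EUR).
c = garman_kohlhagen(X, K, t, r_dom=r_usd, r_for=r_eur, vol=vol, call=True)

# Put on 1 USD with strike 1/K, priced in EUR (domestic = EUR, foreign = USD).
p_f = garman_kohlhagen(1 / X, 1 / K, t, r_dom=r_eur, r_for=r_usd, vol=vol, call=False)

# The exchange rate and notional adjustments recover the call price: c = p_f * X * K.
assert abs(c - p_f * X * K) < 1e-10
```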

What senior investment leaders expect from their risk analysts

Senior investment managers and leaders have historically had a very good understanding of the market. They are the ones who understand the data intuitively.

With time the world has become more integrated and businesses have diversified, which has resulted in diversified data. Even though the markets are getting more complicated than ever, senior investment professionals are still the very best at understanding their data. The challenge they face is deriving knowledge from it: an individual market data point can be understood easily, but when there are many data points it becomes overwhelming even for a seasoned professional to extract knowledge.

From my experience on various projects with such professionals, I have learned that when they come to risk analysts with data and a problem, they are generally not looking for a sophisticated statistical model. They are looking for a user interface toolkit which simplifies their data mining process and lets them extract knowledge from the data easily.

I would like to share some of my personal project experience.

Tool box for a hedge fund manager

A hedge fund manager had a portfolio of revolver loans which he had to sell to a client. In order to make the sale, the manager had to show the client the possible worst-case loss the portfolio could produce.

The data he had was the last 10 years of history for each of his loan customers, together with each customer's outstanding loan. Customers who withdrew more than 80% of their allocated amount were considered risky. He came to our team with a request to develop a tool which could mine knowledge from his data by performing historical simulation on random sub-samples. We offered to perform additional statistical analysis, but he was not interested.

So we developed a tool which could perform simulations with multiple sample sizes of his choice. He wanted to demonstrate to his client, across all the customers:

  1. In a given year, what percentage of the customers were in risky territory?
  2. Which sectors were the most risky?
  3. What dollar value of the portfolio in each segment was at risk?
  4. What was the average outstanding-to-loan ratio for a random sample?
  5. What was the standard deviation of the outstanding-to-loan ratio for the chosen sample?

Our tool gave him the freedom to choose multiple samples at a time and his choice of time period, with all results presented in graphical and simple tabular formats (a rough sketch of the underlying calculation follows). Empowered with his own data, he was able to make the sale!
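As a rough sketch of the kind of calculation the tool ran on each random sub-sample (the column names, the data layout and the interpretation of "outstanding to loan" as outstanding over allocated amount are my assumptions here, not the client's actual schema):

```python
import pandas as pd

def subsample_summary(df, year, sample_size, seed=0):
    """Summary statistics for one random sub-sample of customers in a given year.

    df is assumed to hold one row per customer-year with columns
    'customer', 'year', 'sector', 'allocated' and 'outstanding'.
    """
    snapshot = df[df["year"] == year]
    sample = snapshot.sample(n=min(sample_size, len(snapshot)), random_state=seed)
    ratio = sample["outstanding"] / sample["allocated"]
    risky = sample[ratio > 0.80]            # breached 80% of the allocated amount
    return {
        "pct_risky_customers": len(risky) / len(sample),
        # dollar value at risk per sector, largest first (also flags the riskiest sectors)
        "at_risk_by_sector": risky.groupby("sector")["outstanding"].sum().sort_values(ascending=False),
        "avg_outstanding_ratio": ratio.mean(),
        "std_outstanding_ratio": ratio.std(),
    }
```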

Expected Loss calculator for a Loan Portfolio Manager

A senior loan portfolio manager managed a multi-billion dollar loan portfolio, with exposure in various segments.

He identified a set of risk factors which impacted the exposure of his portfolio and requested us to develop a toolbox with which he could measure the change in exposure.

We developed an Excel-based tool where he could vary the risk factors and check the change in portfolio valuation sector by sector.

PCA component analysis for an Interest Rate Strategist

Principal Component Analysis (PCA) is perhaps the most effective tool for dealing with highly correlated data, and interest rate curve data is one of the best examples of such data. There is high auto-correlation within a historical series as well as very high collinearity across the tenors. Hence PCA is a favorite statistical tool of interest rate strategists.

Many years ago, even though Excel was the favorite tool of business professionals, Excel-based tools were not easily found on the internet. An interest rate strategist came to us with a request to develop an Excel-based tool which calculated PCA for the historical series of any currency of his choice.

We offered to include additional sophisticated statistical models to help him with his analysis, but he was only interested in the PCA. After further discussion, we learned that he relied on his own judgment about the data and required a tool which could do the PCA and, in addition, give a very good visualization of the results.
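The core of such a PCA calculation is only a few lines. Here is a minimal Python sketch (rather than the Excel implementation we actually delivered), assuming the input is a matrix of historical zero rates with one row per business day:

```python
import numpy as np

def yield_curve_pca(rates):
    """PCA on daily changes of a yield-curve history.

    rates: array of shape (n_days, n_tenors).
    Returns the explained-variance ratios and the loadings (one column per component).
    """
    changes = np.diff(rates, axis=0)                 # work on daily changes, not levels
    cov = np.cov(changes, rowvar=False)              # np.cov centres the data internally
    eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh returns ascending eigenvalues
    order = eigenvalues.argsort()[::-1]              # largest variance first
    explained = eigenvalues[order] / eigenvalues.sum()
    return explained, eigenvectors[:, order]

# The first three components are conventionally read as level, slope and curvature.
```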

Conclusion

Investment managers and leaders often reach out to risk analysts with business problems associated with data mining. These people are in the market and have developed their expertise in understanding it. The problems they come with are generally not within the regulatory domain but are about data mining.

When the business problems are not within the regulatory domain, analysts should not push additional statistical analysis beyond what is required; they should instead focus on empowering these professionals with better visualization of their requested analysis. They should develop the requested tools in a way that gives the users more control to modify their analysis from various perspectives.

P&L attribution challenge in FRTB compliance

FRTB’s P&L attribution test requirements are based on two metrics:

  1. The mean unexplained daily P&L (i.e. risk-theoretical P&L minus hypothetical P&L) over the standard deviation of the hypothetical daily P&L:

Ratio1 = mean(P&LRisk – P&LHypo) / σ(P&LHypo)

where the mean and the standard deviation are taken over the N trading days in the month. Ratio1 has to be between -10% and +10%.

  2. The ratio of the variances of the unexplained daily P&L and the hypothetical daily P&L:

Ratio2 = variance(P&LRisk – P&LHypo) / variance(P&LHypo)

Ratio2 has to be less than 20%.

These ratios are calculated monthly and reported before the end of the following month. If the first ratio is outside the range of -10% to +10%, or if the second ratio is in excess of 20%, the desk experiences a breach. If the desk experiences four or more breaches within the prior 12 months, it must be capitalized under the standardized approach. The desk must remain on the standardized approach until it passes the monthly P&L attribution requirement and has satisfied its backtesting exception requirements over the prior 12 months.

The real challenge:

Let's look at variance(P&LRisk – P&LHypo).

Volatility of P&LRisk = σ(P&LRisk)

Volatility of P&LHypo = σ(P&LHypo)

Then variance(P&LRisk – P&LHypo) = σ²(P&LRisk) + σ²(P&LHypo) – 2 × ρ × σ(P&LRisk) × σ(P&LHypo)

For simplicity let us assume that σ(P&LRisk) = σ(P&LHypo) = σ. Then

Ratio2 = variance(P&LRisk – P&LHypo) / σ²(P&LHypo) = (σ² + σ² – 2ρσ²) / σ² = 2 – 2ρ

As per the FRTB guidelines, (2 – 2ρ) < 20%.

This implies ρ > 90%.
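A small sketch of how the two metrics (and the correlation point above) can be checked on a month of daily P&L series; the simulated numbers are illustrative only:

```python
import numpy as np

def pnl_attribution_ratios(pnl_risk, pnl_hypo):
    """The two FRTB P&L attribution metrics for one month of daily P&L."""
    pnl_risk = np.asarray(pnl_risk, dtype=float)
    pnl_hypo = np.asarray(pnl_hypo, dtype=float)
    unexplained = pnl_risk - pnl_hypo
    ratio1 = unexplained.mean() / pnl_hypo.std(ddof=1)
    ratio2 = unexplained.var(ddof=1) / pnl_hypo.var(ddof=1)
    return ratio1, ratio2

# Even risk-theoretical P&L that is ~90% correlated with hypothetical P&L
# sits right at the edge of the 20% variance threshold.
rng = np.random.default_rng(1)
hypo = rng.normal(0.0, 1.0, 22)                       # ~22 trading days in a month
risk = 0.9 * hypo + np.sqrt(1 - 0.9 ** 2) * rng.normal(0.0, 1.0, 22)
r1, r2 = pnl_attribution_ratios(risk, hypo)
breach = not (-0.10 <= r1 <= 0.10) or r2 >= 0.20
```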

The following article explains these challenges from the practical perspective:

FRTB Compliance – Implementation Challenges http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

 

Risk management systems and front office systems

Validators often have to validate derivative pricing models. Banks generally have two systems: one for risk management and another for the front office. Ideally both systems should be identical, but in practice this is not common. Because of the impact front office systems have on trading, they tend to be more sophisticated, which makes them expensive. For that reason banks often use cheaper or home-made systems for their risk management requirements.

The front office engine's pricing calculations tend to be more accurate because they directly impact the business's PnL. Pricing models in risk systems are simplified because, in risk calculation exercises like VaR/PFE, a derivative has to be priced many times, so there are time constraints.

Differences between derivative pricing models in the front office and risk management systems are observed from both an input data perspective and a model perspective.

Input data perspective:

  1. Front office systems use sophisticated techniques for yield curve interpolation, whereas risk management systems get away with simple linear interpolation (see the sketch after this list).
  2. Option-based products use volatility smiles (simple equity options), volatility surfaces (FX options, caps/floors) and volatility cubes (swaptions).
    1. Front office data generally has more points in its smiles, surfaces or cubes than the risk management systems.
    2. Interpolation follows the same pattern as discussed above.
  3. Risk models require historical data to calculate VaR/PFE. For illiquid currencies and exotic derivatives, historical data is often not adequate; in these cases alternate data is used. For example, if the volatility of a particular currency is not available, the volatility of an alternative currency is used as a proxy.
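Following up on the interpolation point in item 1, a small illustration of how linear and cubic-spline interpolation can differ on the same curve (the tenor points and rates below are made up):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up zero-rate curve points: tenors in years, continuously compounded rates.
tenors = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])
zeros = np.array([0.031, 0.033, 0.034, 0.036, 0.039, 0.041])

t = 7.0  # a tenor that is not quoted directly
linear_rate = np.interp(t, tenors, zeros)        # typical risk-system shortcut
spline_rate = CubicSpline(tenors, zeros)(t)      # one of many front-office choices

# Differences of even a fraction of a basis point feed through discount factors
# and can move the prices of long-dated derivatives noticeably.
print(f"linear: {linear_rate:.6f}, cubic spline: {float(spline_rate):.6f}")
```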

The challenge multiplies when there are multiple currencies in the portfolio: in risk management the same simplifications are often applied to each currency, which may not be valid for every currency. Risk managers often prioritize their data based on their portfolio concentration.

Model perspective:

For exotic derivatives, front office pricing models use sophisticated techniques like Monte Carlo simulation, whereas risk models use approximate techniques, often analytical approximations of pricers for which an exact analytical solution is not tractable. For example, to price an American put option the front office may use Monte Carlo simulation, while risk models generally use approximate analytical formulas.

For such reasons there are often differences between front office and risk system prices. Validators often use one system to benchmark the other and thus leverage both in the validation exercise. Even though the prices may not match, the sensitivities should.
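To illustrate the size of the gap such simplifications can create, here is a sketch comparing an American put priced on a binomial tree (standing in for the more accurate front-office price; a Monte Carlo engine would play that role in practice) with the European Black–Scholes put that a simplified risk model might fall back on. All inputs are illustrative:

```python
from math import exp, log, sqrt
from statistics import NormalDist

def american_put_binomial(s, k, t, r, vol, steps=500):
    """American put via a Cox-Ross-Rubinstein binomial tree (front-office-style accuracy)."""
    dt = t / steps
    u = exp(vol * sqrt(dt))
    d = 1 / u
    p = (exp(r * dt) - d) / (u - d)
    disc = exp(-r * dt)
    # Terminal payoffs at each node of the final step.
    values = [max(k - s * u ** j * d ** (steps - j), 0.0) for j in range(steps + 1)]
    # Roll back through the tree, allowing early exercise at every node.
    for i in range(steps - 1, -1, -1):
        values = [
            max(disc * (p * values[j + 1] + (1 - p) * values[j]),
                k - s * u ** j * d ** (i - j))
            for j in range(i + 1)
        ]
    return values[0]

def european_put_bs(s, k, t, r, vol):
    """European Black-Scholes put (the kind of shortcut a risk system might take)."""
    n = NormalDist().cdf
    d1 = (log(s / k) + (r + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return k * exp(-r * t) * n(-d2) - s * n(-d1)

# The gap between the two numbers is the early-exercise premium the shortcut ignores.
print(american_put_binomial(100, 100, 1.0, 0.05, 0.2),
      european_put_bs(100, 100, 1.0, 0.05, 0.2))
```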

Need for FRTB

I attended an FRTB conference last Thursday and Friday.

There, a speaker gave a very nice and interesting overview of why FRTB exists in the first place. Listening to him was a treat for the ears!

I was able to capture some notes from his presentation.

During the pre-crisis era

  1. Rules around the trading book/banking book boundary were imprecise, and regulators could not stop the regulatory arbitrage happening in the banking sector through transfers of assets between the banking book and the trading book and vice versa.
  2. There was no desk-wise visibility for the regulators. Regulators had the right to look into details below the enterprise level, but because there were no set procedures around it, they seldom did.
    1. For this reason regulators were not able to question the desks which were carrying out complex trades in seniority tranches.
  3. There was no regulatory tool which they could use to stop model usage. Regulators could question the use of a model but had to deal with each model individually, and in the process many risks remained uncovered.
  4. There was no linkage between the standardized approach and the model-based approach: no logical or intuitive relationship between the capital calculated under the two approaches.
  5. The degree of complexity was not penalized in a way regulators deemed fit; rating migration and default risk were not captured.
  6. With the VaR approach, regulators and risk managers were looking at the point where the tail began rather than at what was within the tail.
  7. The 2008 crisis proved that the way liquidity was measured was inadequate.

After the crisis

Regulators came up with Basel 2.5/3, which was a quick patch-up to address some of the above challenges:

  1. Rating migration risk was captured by including CVA in capital calculations.
  2. Liquidity was captured by introducing the multiplication factor 3.
  3. Tail risk was captured by calculating the VaR in stressed market conditions.
  4. A complex calculation was introduced to capture incremental risk charges.
  5. Comprehensive risk measures were applied to correlation trading books.

The FRTB rules were decided over a course of almost 5 years. They address all the above issues and also include:

  1. The need to address data quality issues by introducing non-modelable risk factors.
  2. Comparability of the Standardized Approach and the Internal Model Approach by making the Standardized Approach risk sensitive.
  3. Mandatory PnL attribution, and hence triangulation between front office data and risk management data.

 

I will update this page as I learn and understand FRTB more.