Interest Rate Sampling Algorithm

I developed the algorithm below in 2007, when computation power was limited. I doubt it is still relevant today, now that we have far more computation power, and I am also not sure it would pass model validation scrutiny.

Business problem:

The interest rate risk department wanted to calculate the interest rate risk, one year down the line, associated with the surplus (assets minus liabilities) of various product lines.

Our previous methodology

1) Using an interest rate scenario generator, with the present-day yield curve as input, we generated 300,000 yield-curve scenarios one year down the line.

2) We then used a duration-convexity methodology to calculate the change in the market value of the surplus under each scenario.

3) This gave us 300,000 changes in surplus value, from which we calculated VaR at the 95, 99, 99.9 and 99.99 percentiles (a small sketch of this step follows the list).
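Below is a minimal sketch of steps 2 and 3, assuming parallel rate shocks and illustrative market value, duration and convexity figures; the names surplus_pnl and var_percentiles are mine, not part of the original implementation.

import numpy as np

def surplus_pnl(rate_shocks, market_value, duration, convexity):
    # Duration-convexity approximation of the change in surplus value
    # for parallel rate shocks expressed in decimals.
    return market_value * (-duration * rate_shocks + 0.5 * convexity * rate_shocks ** 2)

def var_percentiles(pnl, levels=(95, 99, 99.9, 99.99)):
    # VaR at each confidence level: the loss exceeded with probability 1 - level.
    return {lvl: -np.percentile(pnl, 100 - lvl) for lvl in levels}

# Stand-in for the 300,000 one-year scenarios: simulated parallel shocks.
shocks = np.random.normal(0.0, 0.01, size=300_000)
pnl = surplus_pnl(shocks, market_value=1e9, duration=7.5, convexity=80.0)
print(var_percentiles(pnl))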

Problems

It is clear that 300,000 is a very large number of scenarios and becomes a serious bottleneck if we want to carry out rigorous scenario analysis. Hence there arose a need for a sampling methodology.

To address this, I experimented with the approach in the following research paper:

 http://www.cwu.edu/~chueh/naaj0207_8.pdf

But I found that the time complexity of its pivoting algorithm was itself a big bottleneck, so I designed my own pivoting algorithm, explained below.

1) First, S1 scenarios are chosen at random from the full collection of S2 scenarios.

2) A distance matrix of size S1 x S1 is calculated, representing the distance between each pair of sampled scenarios.

3) For each scenario the farthest distance to any other scenario is taken, giving S1 farthest distances.

4) Among these S1 farthest distances, the smallest one is chosen along with its scenario. Call this distance D. This scenario gives an approximate idea of the centre.

5) Starting with this scenario as the first pivot, find another scenario that is at least D*alpha away from the first pivot and call it the second pivot. Here alpha is any number less than one; the greater the value of alpha, the smaller the number of resulting pivots.

6) Next, find a third scenario that is at least D*alpha away from both of the first two pivots and call it the third pivot.

7) In general, the Nth pivot is a scenario that is at least D*alpha away from each of the preceding N-1 pivots.

8) We carry on this process until all S1 scenarios have been examined; the value of alpha determines how many pivots we end up with.

9) After the pivots are formed, we map all S2 scenarios onto them and assign each pivot a probability proportional to the number of scenarios mapped to it. (A code sketch of this procedure follows the list.)
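Below is a minimal sketch of the pivot selection described above. The distance measure (Euclidean distance between yield-curve scenario vectors), the function name select_pivots and the NumPy/SciPy usage are my assumptions, not the original implementation; any scenario distance would do.

import numpy as np
from scipy.spatial.distance import cdist

def select_pivots(scenarios, s1, alpha, seed=0):
    # scenarios: array of shape (S2, n_curve_points). Returns indices of the chosen pivots.
    rng = np.random.default_rng(seed)
    sample_idx = rng.choice(len(scenarios), size=s1, replace=False)  # S1 random scenarios
    sample = scenarios[sample_idx]

    # S1 x S1 distance matrix between the sampled scenarios.
    dist = cdist(sample, sample)

    # The scenario whose farthest distance is smallest sits roughly at the centre;
    # that smallest farthest distance is D.
    farthest = dist.max(axis=1)
    centre = int(farthest.argmin())
    threshold = alpha * farthest[centre]   # alpha < 1; larger alpha -> fewer pivots

    pivots = [centre]
    # Greedily accept any scenario at least alpha*D away from every pivot chosen so far.
    for i in range(s1):
        if i != centre and all(dist[i, p] >= threshold for p in pivots):
            pivots.append(i)
    return sample_idx[np.array(pivots)]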

For the experimentation I used S2 = 5000 and S1 = 2000, but we can play around with the values of S1 and alpha.
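Continuing the sketch above (it reuses select_pivots), the mapping and probability assignment might look as follows. Assigning each scenario to its nearest pivot is my assumption, consistent with the description, and alpha = 0.5 with random stand-in curves is purely illustrative.

import numpy as np
from scipy.spatial.distance import cdist

def map_to_pivots(scenarios, pivot_idx):
    # Nearest pivot for every scenario, and the probability weight of each pivot.
    pivots = scenarios[pivot_idx]
    nearest = cdist(scenarios, pivots).argmin(axis=1)
    probs = np.bincount(nearest, minlength=len(pivot_idx)) / len(scenarios)
    return nearest, probs

# Illustrative run with the values used in the experiment (random stand-in curves).
scenarios = np.random.normal(size=(5000, 10))   # S2 = 5000 yield-curve scenarios
pivot_idx = select_pivots(scenarios, s1=2000, alpha=0.5)
nearest, probs = map_to_pivots(scenarios, pivot_idx)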

 How I compared the results

I calculated the VaRs associated with all S2 scenarios and compared them with the VaRs obtained from the sampled scenarios.
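A rough sketch of that comparison is below; the probability-weighted quantile helper is my assumption about how a VaR can be read off the sampled pivots and their weights.

import numpy as np

def weighted_var(pnl, probs, level):
    # Loss exceeded with probability 1 - level/100, using the pivot probabilities as weights.
    order = np.argsort(pnl)
    cum = np.cumsum(probs[order])
    idx = np.searchsorted(cum, 1.0 - level / 100.0)
    return -pnl[order][min(idx, len(pnl) - 1)]

def compare_vars(full_pnl, pivot_pnl, pivot_probs, levels=(95, 99, 99.9, 99.99)):
    # full_pnl: surplus changes under all S2 scenarios; pivot_pnl/pivot_probs: the sampled set.
    for lvl in levels:
        full = -np.percentile(full_pnl, 100 - lvl)
        sampled = weighted_var(pivot_pnl, pivot_probs, lvl)
        print(f"VaR {lvl}: full = {full:.2f}, sampled = {sampled:.2f}, error = {sampled - full:.2f}")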

The algorithm is excellent in the tail cases, as follows.

How: the error in VaR 99.99 is minimal compared to the error in VaR 95.

Why: while forming the pivots we do not prune the universe; the pivots are made out of all the given scenarios. Also, we do not remove pivots that have very few scenarios mapped to them.

Hence we include all the extreme cases.

This also makes it easy to see why there is more error at the 95th percentile than at the 99.99th: the scenarios in the VaR 95 region are the more dispersed ones, so they may end up being represented by pivots that are not particularly similar to them.

But the extreme scenarios are distinctive, few in number and, above all, never pruned, so we find minimal error in them.
