Validating AML/Sanctions Models: A Case Study

Last week my white paper on the validation of AML/Sanctions models was published by ACAMS. The download link:

https://www.acams.org/white-paper-validating-aml-sanctions-models-a-case-study/

The unique items discussed in the paper were:

1. An explanation of why a Transaction Monitoring system is a model.

2. Through unambiguous examples, the paper explains the importance of the Risk Assessment report in the validation of AML models.

3. Through prescriptive examples, this paper explains:

    ·    How to leverage the bank's AML Risk Assessment and regulatory literature such as the FFIEC BSA/AML Examination Manual to identify the bank's relevant red-flaggable customer behaviors.

    ·    How to convert those customer behaviors into program code that identifies them in the bank's transactions database (see the sketch after this list).

    ·    How to develop test cases/examples that explain the identified issues and their materiality to stakeholders in a logical sequence.

4. The paper also explains how a Sanctions Screening model can be validated and benchmarked.
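
The paper's worked examples are specific to the bank under review; purely as a generic illustration of the "behavior to code" step (not taken from the paper), the sketch below flags one hypothetical red flag, multiple cash deposits just under the $10,000 reporting threshold within a rolling seven-day window, using pandas. The table layout and column names are assumptions.

```python
# Illustrative sketch only: flag customers with repeated sub-threshold cash
# deposits in a rolling window. Column names (customer_id, txn_date,
# txn_type, amount) are hypothetical.
import pandas as pd

def flag_potential_structuring(txns: pd.DataFrame,
                               low: float = 9000.0,
                               high: float = 10000.0,
                               window_days: int = 7,
                               min_count: int = 3) -> pd.DataFrame:
    """Return customers with >= min_count sub-threshold cash deposits in any rolling window."""
    cash = txns[(txns["txn_type"] == "CASH_DEPOSIT") &
                (txns["amount"].between(low, high, inclusive="left"))].copy()
    cash = cash.sort_values("txn_date")
    counts = (cash.set_index("txn_date")
                  .groupby("customer_id")["amount"]
                  .rolling(f"{window_days}D")
                  .count())
    flagged = counts[counts >= min_count].reset_index()
    return flagged[["customer_id", "txn_date"]].drop_duplicates("customer_id")

# Toy usage example
txns = pd.DataFrame({
    "customer_id": [1, 1, 1, 2],
    "txn_date": pd.to_datetime(["2023-01-02", "2023-01-04", "2023-01-05", "2023-01-02"]),
    "txn_type": ["CASH_DEPOSIT"] * 4,
    "amount": [9500, 9800, 9900, 9500],
})
print(flag_potential_structuring(txns))
```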

This white paper provides validators with approaches they can use to gain an in-depth understanding of a bank's customer demographics, product profiles, and customer behavior with respect to the products they use. Using data analytics, we discuss the steps a validator can take to ensure that the existing rules and assumptions of the model adequately capture the risks described in the bank's risk assessment document.

Using the approaches discussed in this paper, the validator can model and review customer behavior and investigate whether the existing AML models adequately capture suspicious patterns. The suggested approaches can easily be enhanced or modified based on the respective bank's demographics and product offerings.

Please feel free to post comments/suggestions.

 

Transition Matrix Forecasting Model Validation

The purpose of this model was to project rating transition scenarios for regulatory purposes. The Federal Reserve provided forecasted values of various macroeconomic variables for the Base, Adverse, and Severely Adverse scenarios over 13 quarters.

  1. Quarterly historical Transition Probability Matrices (TPMs) from 2000Q1 to 2017Q2 from Moody's were used; thus there were 70 TPMs in total.
  2. Moody's also provided a Base Matrix, a long-term average based on their business experience.
  3. A Stressed Matrix was calculated based on the 2008-2009 recession.
  4. For each historical matrix, a weight parameter 'w' was calculated based on that matrix's distance from the Base Matrix and the Stressed Matrix. Thus a series of 'w' values of length 70 was obtained.
  5. This series was regressed against all the macroeconomic variables provided by the Fed, and the best combination was chosen. The chosen variable was the BBB spread, i.e., the difference between Treasury and corporate bond rates.
  6. Using this relationship and the Fed's forecasts, 'w' values were forecasted for the Base, Adverse, and Severely Adverse scenarios.
  7. Using the projected values of 'w', the relationship in point 4 was inverted and the forecasted TPMs were calculated (a sketch of this scheme follows).
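
The write-up above does not pin down the exact functional form, so the sketch below is only my reading of the scheme: each historical TPM is approximated as (1 − w)·Base + w·Stressed, w is fitted by least squares, regressed on the BBB spread, and the fitted relationship is then applied to the Fed's 13-quarter scenario path. The linear form and the variable names are assumptions for illustration.

```python
import numpy as np

def fit_w(tpm: np.ndarray, base: np.ndarray, stressed: np.ndarray) -> float:
    """Least-squares weight w minimising ||tpm - ((1 - w) * base + w * stressed)||."""
    direction = (stressed - base).ravel()
    return float(direction @ (tpm - base).ravel() / (direction @ direction))

def forecast_tpms(hist_tpms, base, stressed, macro_hist, macro_scenario):
    # 1. One weight per historical quarter.
    w_hist = np.array([fit_w(m, base, stressed) for m in hist_tpms])
    # 2. Regress w on the chosen macro driver (here, the BBB spread).
    slope, intercept = np.polyfit(macro_hist, w_hist, deg=1)
    # 3. Forecast w along the Fed scenario path and rebuild the matrices.
    w_fcst = intercept + slope * np.asarray(macro_scenario)
    return [(1.0 - w) * base + w * stressed for w in w_fcst]
```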

Key limitations of this methodology

Firstly, the optimization process limits the values the projected matrix can take.

Suppose the TPM is an 8×8 matrix; it is then a 64-variable object approximated by a single weight 'w'. Essentially, 'w' averages the volatility of the 64 variables and hence understates the volatility of some variables and overstates it for others.

Secondly, because of the optimization methodology, the projected values in the upper half of the TPM are capped by the Stressed Matrix and floored by the Base Matrix, and vice versa for the lower half of the TPM.

Thirdly, the historical quarterly TPMs are sparse matrices, so 'w' may not be able to capture their essence.

Backtesting

The methodology can be backtested using multiple approaches.

  1. Monte Carlo simulation

The projection was for 13 quarters, so 13 TPMs were chosen at random from the 66 historical TPMs, multiple times, with and without replacement. The cumulative TPMs were calculated and the default rates for the various ratings were derived. The projected values were compared with various percentiles of the simulated distribution (a sketch of this simulation follows the list below).

  2. The Severely Adverse projected TPM was compared with 13-quarter matrices starting from 2007Q4 to 2009Q3.
  3. The 66 calculated 'w' values were used to recalculate the respective historical TPMs, and these were compared with the actual historical TPMs.
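
A rough sketch of the Monte Carlo approach in point 1, under the assumption that the last column of each TPM is an absorbing default state (the actual layout of the Moody's matrices may differ):

```python
import numpy as np

def simulated_default_percentiles(hist_tpms, horizon=13, n_sims=10_000,
                                  replace=True, percentiles=(50, 95, 99)):
    """Cumulative default probabilities by starting rating, across simulated 13-quarter paths."""
    rng = np.random.default_rng(0)
    hist_tpms = np.asarray(hist_tpms)              # shape (T, k, k)
    k = hist_tpms.shape[1]
    sims = []
    for _ in range(n_sims):
        idx = rng.choice(len(hist_tpms), size=horizon, replace=replace)
        cum = np.eye(k)
        for m in hist_tpms[idx]:                   # compose the quarterly transitions
            cum = cum @ m
        sims.append(cum[:, -1])                    # cumulative PD per starting rating
    return np.percentile(np.array(sims), percentiles, axis=0)
```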

Interest Rate Sampling Algorithm

In 2007, when computation power was limited, I developed the algorithm below. I doubt it is still relevant today because we now have huge computation power, and I am also not sure it would pass model validation scrutiny.

Business problem:

The interest rate risk department wanted to calculate the interest rate risk, one year down the line, associated with the surplus (assets minus liabilities) of various product lines.

Our previous methodology

1) We used an interest rate scenario generator to generate 300,000 scenarios one year down the line, the input being the present-day yield curve.

2) We then used the duration-convexity methodology to calculate the change in the market value of the surplus.

3) Hence we had 300,000 values of the change in surplus, from which we calculated the VaR at the 95, 99, 99.9, and 99.99 percentiles.

Problems

It is trivial to see that 300,000 is a very large number and a big bottleneck if we want to carry out any rigorous scenario analysis. Hence arose the need for a sampling methodology.

For this I experimented with the following research paper:

 http://www.cwu.edu/~chueh/naaj0207_8.pdf

But I found that the paper's pivoting algorithm's time complexity was a big bottleneck, so I designed my own pivoting algorithm, which I explain below.

  1. First, S1 scenarios were chosen randomly from the collection of S2 scenarios.
  2. A distance matrix of size S1 × S1 was calculated, representing the distance of each scenario from every other scenario.
  3. For each scenario the farthest distance was taken; hence we had S1 farthest distances.
  4. Among these S1 farthest distances, the smallest one was chosen along with its scenario. Let this distance be D. This scenario gives us an approximate idea of the center.
  5. Starting with this scenario as the first pivot, find another scenario that is at least alpha × D away from the first pivot and call it the second pivot. Here alpha is any number less than one; the greater the value of alpha, the smaller the resulting number of scenarios.
  6. Now we find a third scenario that is at least alpha × D away from the first two pivots and call it the third pivot.
  7. In general, the Nth pivot is a scenario that is at least alpha × D away from the preceding N-1 pivots.
  8. We carry out this process until no remaining scenario satisfies the distance criterion. Depending on the value of alpha, we end up with a certain number of pivots.
  9. After the pivots are formed, we map the S2 scenarios onto them and assign probabilities according to the number of scenarios mapped to each pivot.

For experimentation purposes I used S2 = 5000 and S1 = 2000, but one can play around with the values of S1 and alpha. A sketch of the algorithm is below.
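
The steps above translate fairly directly into NumPy. The sketch below is illustrative only: it assumes each scenario is a row of a 2-D array (e.g., a yield curve sampled at a few points) and uses Euclidean distance, which is an assumption rather than the distance measure actually used in 2007.

```python
import numpy as np
from scipy.spatial.distance import cdist

def build_pivots(scenarios: np.ndarray, s1: int, alpha: float, seed: int = 0):
    """Reduce a set of scenarios (rows) to weighted pivot scenarios."""
    rng = np.random.default_rng(seed)
    sample = scenarios[rng.choice(len(scenarios), size=s1, replace=False)]

    # Pairwise distances within the S1 sample.
    dists = cdist(sample, sample)

    # The scenario whose farthest neighbour is nearest approximates the centre;
    # its farthest distance is D.
    farthest = dists.max(axis=1)
    first = int(farthest.argmin())
    D = farthest[first]

    # Greedily keep any sampled scenario at least alpha * D away from all pivots so far.
    pivots = [first]
    threshold = alpha * D
    for i in range(s1):
        if i != first and all(dists[i, p] >= threshold for p in pivots):
            pivots.append(i)
    pivot_points = sample[pivots]

    # Map every original scenario to its nearest pivot; probabilities follow the counts.
    nearest = cdist(scenarios, pivot_points).argmin(axis=1)
    probs = np.bincount(nearest, minlength=len(pivots)) / len(scenarios)
    return pivot_points, probs

# Example with the sizes mentioned above (5,000 scenarios, sample of 2,000)
scenarios = np.random.default_rng(1).normal(size=(5000, 10))   # 10 hypothetical curve points
pivots, probs = build_pivots(scenarios, s1=2000, alpha=0.5)
print(len(pivots), probs.sum())
```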

 How I compared the results

I calculated the VaRs associated with the S2 scenarios and compared them with the VaRs associated with the sampled scenarios.

The algorithm is excellent in the tail cases, for the following reason.

How: there is minimal error in VaR 99.99 in comparison to VaR 95.

Why: while building the pivots we do not prune the universe; we build the pivots out of all the given scenarios. Also, we do not remove pivots that have very few scenarios mapped to them.

Hence we include all the extreme cases.

This also makes it easy to see why there is more error in the 95th percentile case than in the 99.99th percentile case: at the 95th percentile the scenarios are the dispersed ones, and hence they may be compared with pivots that are not very similar to them.

But because the extreme scenarios are unique, fewer in number, and above all not pruned, we find minimal error in them.

Validation plan of FX Vanilla Option Pricing models

A long time back I validated an FX vanilla option pricing model. I would like to present some suggestions on its validation.

The validation plan of an FX vanilla option pricing model can be divided into the following parts:

  1. Data validation
  2. Conceptual soundness
  3. Outcomes analysis
  4. Ongoing monitoring

Data validation will involve:

  1. Checking data consistency between the current data source, risk systems, and front office pricing systems.
  2. Checking whether the right interest rate curves and volatility surfaces are used. It was once observed that, because of a lack of volatility surface information, the model owners used the surfaces of an alternate currency. If such a decision has been made, then the rationale should be documented.
  3. Checking FX spot rates and forward rates.

Here we will have to choose a set of currency pairs on which the validation should primarily focus. These choices should include:

  1. Currency pairs that are adequately liquid
  2. Currency pairs that form a major part of the portfolio
  3. Currency pairs containing currencies whose exchange rates are available only through indirect quotes
  4. At least one currency pair that is not adequately liquid, so that the impact of illiquidity can be studied.

Conceptual Soundness can be checked by the following steps:

  1. The first step should be the external replication of all the following:
    1. Pricing of the options
    2. Calculation of the sensitivities
    3. Calculation of the volatility surfaces
    4. Construction of the interest rate curves and interpolation methodologies
    5. If the risk system has to be validated then the VaR calculations must be replicated.
  2. The external replication should be tested against all possible borderline cases of all the variables and their possible combinations.
  3. The second part should explore the possibilities of benchmarking the currently used methodologies for pricing, interpolation, and volatility surface construction by:
    1. External sources: the usage of those sources should be justified by the validator.
    2. Alternate methodologies: again, same as above.
    3. The results should be used either to challenge or to support the model output.
  4. If the Garman–Kohlhagen pricing formula is used, then its data assumptions should be checked by studying historical data (a reference sketch of the formula follows this list).
    1. The formula assumes that spot rates are lognormally distributed; this should be checked by studying the historical data.
    2. The consistency and inconsistencies of interest rate parity should be checked using historical data.
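
For reference, a minimal Garman–Kohlhagen pricer that a validator could use as the starting point of an external replication is sketched below. It assumes flat domestic and foreign rates and a single implied volatility (no smile), so it is a simplification of any production pricer.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def garman_kohlhagen(spot, strike, vol, t, r_dom, r_for, call=True):
    """Vanilla FX option price in domestic currency per unit of foreign notional."""
    N = NormalDist().cdf
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    if call:
        return spot * exp(-r_for * t) * N(d1) - strike * exp(-r_dom * t) * N(d2)
    return strike * exp(-r_dom * t) * N(-d2) - spot * exp(-r_for * t) * N(-d1)

# Example: call on EUR, spot 1.10 USD per EUR, strike 1.12, 25% vol, 1 year (made-up inputs)
print(garman_kohlhagen(1.10, 1.12, 0.25, 1.0, r_dom=0.03, r_for=0.01))
```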

Outcomes Analysis: a report should be developed discussing which currency pairs demonstrate consistency in the properties mentioned above.

Ongoing Monitoring

The ongoing monitoring report should document periodic model performance compared with external data and realized trades. The monitoring report should also document data deviations and their impact.

 

Dear reader, first of all thank you for visiting this page! If you think the validation can be improved, please leave a suggestion; I will include it with acknowledgment.

FX Call Option = Put Option from Counterparty’s Perspective

An FX call option is an option to buy a foreign currency at a particular domestic-currency price. Looked at from the counterparty's perspective, it is also a put option to sell the domestic currency at the corresponding price in foreign currency.

Let us take the example of a USD/EUR currency option. Suppose the price of 1 unit of euro is X (in USD), the strike price is K, and c is the price of the call option in USD.

On the other side, the price of 1 USD in euros is 1/X, and the strike to buy 1 unit of USD will be 1/K euros. Let the price of this put option be p_f (in EUR).

The options c and p_f are similar in nature, and hence their prices should be identical. But if you calculate them, they won't be: you will have to make exchange rate adjustments and notional assumptions.

First, p_f has to be multiplied by X to convert the option price into the domestic currency.

The resulting value has to be multiplied further by K because the notional amount of the call is in euros.

The resultant price will be equal to c.

Summarizing: c = p_f × X × K (a quick numerical check follows).
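
A quick numerical check of the relationship, using the Garman–Kohlhagen formula with made-up inputs (restated here so the snippet is self-contained). Swapping the roles of the two currencies inverts the spot and strike and swaps the interest rates; the two printed numbers should agree.

```python
from math import log, sqrt, exp
from statistics import NormalDist

def gk(spot, strike, vol, t, r_dom, r_for, call=True):
    N = NormalDist().cdf
    d1 = (log(spot / strike) + (r_dom - r_for + 0.5 * vol**2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    if call:
        return spot * exp(-r_for * t) * N(d1) - strike * exp(-r_dom * t) * N(d2)
    return strike * exp(-r_dom * t) * N(-d2) - spot * exp(-r_for * t) * N(-d1)

X, K, vol, t, r_usd, r_eur = 1.10, 1.12, 0.25, 1.0, 0.03, 0.01

# USD-denominated call on 1 EUR, strike K USD per EUR
c = gk(X, K, vol, t, r_dom=r_usd, r_for=r_eur, call=True)

# EUR-denominated put on 1 USD, spot 1/X EUR per USD, strike 1/K
p_f = gk(1 / X, 1 / K, vol, t, r_dom=r_eur, r_for=r_usd, call=False)

print(c, p_f * X * K)   # the two numbers should be identical
```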

What senior investment leaders expect from their risk analysts

Senior investment managers and leaders have historically had a very good understanding of the market. They are the ones who understand the data intuitively.

Over time the world has become more integrated and businesses have diversified, which has resulted in diversified data. Even though the markets are getting more complicated than ever before, senior investment professionals are still the very best at understanding them; they still understand their data. The challenge they face is deriving knowledge from that data: an individual market data point can be easily understood by them, but when there are many data points, it becomes overwhelming even for a seasoned professional to derive knowledge from them.

From my experience on various projects with such professionals, I have learned that when they come to risk analysts with data and a problem, they are generally not looking for a sophisticated statistical model. They are looking for a user-interface toolkit which simplifies their data-mining process and hence lets them extract knowledge from the data easily.

I would like to share some of my personal project experience.

Toolbox for a hedge fund manager

A hedge fund manager had a portfolio of revolver loans which he had to sell to a client. In order to make the sale, the manager had to show the client the worst possible loss the portfolio could produce.

The data he had was the last 10 years of history for each of his loan customers, along with each customer's outstanding loan. Customers who breached the limit of 80% withdrawal of the allocated amount were considered risky. He came to our team with a request to develop a tool that could mine knowledge from the shared data by performing historical simulation on random subsamples. We offered to perform additional statistical analysis, but he was not interested.

So we developed a tool which could perform simulations with multiple sample sizes of his choice. He wanted to demonstrate to his client, across all the customers:

  1. In a given year, what percentage of the customers were in risky territory?
  2. Which sectors were the riskiest?
  3. How much dollar value of each segment's portfolio was at risk?
  4. What is the average outstanding-to-loan ratio for a random sample?
  5. What is the standard deviation of the outstanding-to-loan ratio for the chosen sample?

Our tool gave him the freedom to choose multiple samples at a time and his choice of time period. All the visualizations were in graphical and simple tabular formats. Empowered with his own data, he was able to make the sale! (A sketch of the kind of subsample analysis involved follows.)
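
For illustration, the core of such a subsample analysis might look like the sketch below. The data layout (one row per customer-year with columns year, sector, outstanding, allocated) is my assumption, not the client's actual schema, and the 80% rule is applied to the outstanding-to-allocated ratio.

```python
import pandas as pd

def subsample_stats(panel: pd.DataFrame, year: int, sample_size: int, seed=None):
    """Summary statistics for a random subsample of customers in a given year."""
    snap = panel[panel["year"] == year].sample(sample_size, random_state=seed)
    util = snap["outstanding"] / snap["allocated"]            # utilisation ratio
    risky = snap[util >= 0.80]                                # the 80% breach rule
    return {
        "pct_risky": len(risky) / len(snap) * 100,
        "dollars_at_risk_by_sector": risky.groupby("sector")["outstanding"].sum().to_dict(),
        "avg_outstanding_to_allocated": util.mean(),
        "std_outstanding_to_allocated": util.std(),
    }
```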

Expected Loss calculator for a Loan Portfolio Manager

A senior loan portfolio manager managed a multi-billion dollar loan portfolio, with exposure in various segments.

He identified a set of risk factors which impacted the exposure of his portfolio and requested that we develop a toolbox with which he could measure the change in exposure.

We developed an Excel-based tool in which he could vary the risk factors and check the change in portfolio valuation sector by sector.

PCA for an Interest Rate Strategist

Principal Component Analysis (PCA) is perhaps the most effective tool for dealing with highly correlated data, and interest rate curve data is one of the best examples of such data: there is high autocorrelation within a historical series as well as very high collinearity across tenors. Hence PCA is a favorite statistical tool of interest rate strategists.

Many years ago, even though Excel was the favorite tool of business professionals, Excel-based tools were not easily found on the internet. An interest rate strategist came to us with a request to develop an Excel-based tool that calculated PCA for any historical series of a currency of his choice.

We offered to include additional sophisticated statistical models to help him in his analysis, but he was only interested in the PCA. After further discussion, we learned that he relied on his own judgment about the data and required a tool which could do the PCA and, in addition, give very good visualizations of the results. A minimal sketch of that calculation is below.
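
The core PCA calculation behind such a tool is small; a minimal NumPy version, applied to a hypothetical matrix of daily yield-curve changes (rows = dates, columns = tenors), might look like this. The first three components are typically read as the level, slope, and curvature factors.

```python
import numpy as np

def yield_curve_pca(rate_changes: np.ndarray, n_components: int = 3):
    """PCA of daily rate changes; returns loadings per tenor and explained variance shares."""
    centered = rate_changes - rate_changes.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)             # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:n_components]   # keep the largest components
    explained = eigvals[order] / eigvals.sum()
    return eigvecs[:, order], explained
```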

Conclusion

Often investment managers and leaders reach out to risk analysts with business problems associated with data mining. These people are in the market and have developed their expertise in understanding it. The problems they bring are generally not within the regulatory domain but are about data mining.

When the business problems are not within the regulatory domain, analysts should focus not on providing additional statistical analysis beyond what was requested but on empowering the users with better visualization of the requested analysis. They should develop the requested tools in a way that gives the users more authority to modify the analysis from various perspectives.

P&L attribution challenge in FRTB compliance

FRTB’s P&L attribution test requirements are based on two metrics:

  1. The mean unexplained daily P&L (i.e., risk-theoretical P&L minus hypothetical P&L) over the standard deviation of the hypothetical daily P&L:

     Ratio1 = mean(P&LRisk – P&LHypo) / σ(P&LHypo)

     where the mean and the standard deviation are taken over the N trading days in the month. Ratio1 has to be between -10% and +10%.

  2. The ratio of the variance of the unexplained daily P&L to the variance of the hypothetical daily P&L:

     Ratio2 = variance(P&LRisk – P&LHypo) / variance(P&LHypo)

     Ratio2 has to be less than 20%.

These ratios are calculated monthly and reported before the end of the following month. If the first ratio is outside the range of -10% to +10%, or if the second ratio is in excess of 20%, the desk experiences a breach. If the desk experiences four or more breaches within the prior 12 months, it must be capitalized under the standardized approach. The desk must remain on the standardized approach until it passes the monthly P&L attribution requirement and has satisfied its backtesting exception requirements over the prior 12 months. A sketch of the two metrics is below.
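
For one desk and one month, the two metrics and the breach test reduce to a few lines. The sketch below assumes pl_risk and pl_hypo are arrays of the N daily risk-theoretical and hypothetical P&Ls; the sample statistics used are an illustrative choice.

```python
import numpy as np

def pla_ratios(pl_risk: np.ndarray, pl_hypo: np.ndarray):
    """P&L attribution metrics for one desk over one month of daily P&Ls."""
    unexplained = pl_risk - pl_hypo
    ratio1 = unexplained.mean() / pl_hypo.std(ddof=1)        # must lie within [-10%, +10%]
    ratio2 = unexplained.var(ddof=1) / pl_hypo.var(ddof=1)   # must be below 20%
    return ratio1, ratio2

def is_breach(ratio1, ratio2):
    return abs(ratio1) > 0.10 or ratio2 > 0.20
```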

The real challenge:

Let's look at variance(P&LRisk – P&LHypo).

Volatility of P&LRisk  = σ(P&LRisk)

Volatility of P&LHypo  = σ(P&LHypo)

Then variance(P&LRisk – P&LHypo) = σ²(P&LHypo) + σ²(P&LRisk) – 2ρ × σ(P&LRisk) × σ(P&LHypo)

For simplicity let us assume that σ(P&LRisk) = σ(P&LHypo) = σ. Then

Ratio2 = variance(P&LRisk – P&LHypo) / σ²(P&LHypo) = (σ² + σ² – 2ρσ²) / σ² = 2 – 2ρ

As per the FRTB guidelines, (2 – 2ρ) < 20%,

which implies ρ > 90%.

The following article explains these challenges from a practical perspective:

FRTB Compliance – Implementation Challenges http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

 

Risk management systems and front office systems

Often validators have to validate derivative pricing models. Banks generally have two systems: one for risk management and another for the front office. It is advisable that both systems be exactly similar, but in practice this is not common. Because of the direct impact front office systems have on trading, they are more sophisticated, which also makes them expensive. For that reason banks often use cheaper or home-grown systems for their risk management requirements.

The front office engine's pricing calculations tend to be more accurate because they directly impact the business's PnL. Pricing models in risk systems are simplified because, in risk calculation exercises like VaR/PFE, a derivative has to be priced many times, so there are time constraints.

Differences in derivative pricing models between the front office and risk management systems are observed from the input data perspective and the model perspective.

Input data perspective:

  1. Front office systems use sophisticated techniques for yield curve interpolation, whereas risk management systems get away with simple linear interpolation (see the toy comparison after this list).
  2. Options-based products use volatility smiles (simple equity options), volatility surfaces (FX options, caps/floors), and volatility cubes (swaptions).
    1. Front office data generally has more points on the smiles, surfaces, or cubes than risk management systems.
    2. Interpolation follows the same pattern as discussed above.
  3. Risk models require historical data to calculate VaR/PFE. Often, for illiquid currencies as well as for exotic derivatives, the historical data is inadequate; in these cases alternate data is used. For example, if the volatility of a particular currency is not available, then the volatility of an alternative currency is used as a proxy.
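
As a toy illustration of the interpolation point in item 1, the same sparse zero curve interpolated linearly (as a risk system might do) and with a cubic spline (closer to a front office approach) gives visibly different rates between the quoted tenors. The quotes below are made up.

```python
import numpy as np
from scipy.interpolate import CubicSpline

tenors = np.array([0.25, 1.0, 2.0, 5.0, 10.0, 30.0])            # years
zeros  = np.array([0.031, 0.033, 0.034, 0.036, 0.038, 0.040])   # made-up zero rates

grid = np.linspace(0.25, 30.0, 120)
linear = np.interp(grid, tenors, zeros)       # risk-system-style interpolation
spline = CubicSpline(tenors, zeros)(grid)     # front-office-style interpolation

print(f"max absolute rate difference: {np.abs(linear - spline).max():.5f}")
```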

The challenge multiplies when there are multiple currencies in the portfolio. The data triangulation issue grows because, in risk management, the same simplifications are often applied to every currency, which may not be valid for each of them. Risk managers often prioritize their data based on portfolio concentration.

Model perspective:

For exotic derivatives, front office pricing models use sophisticated techniques like Monte Carlo simulation, whereas risk models use approximate techniques. These are often analytical approximations of derivative pricers for which an exact analytical solution is not tractable. For example, to price an American put option the front office uses Monte Carlo simulation, but it is generally observed that risk models use approximate analytical formulas.

For such reasons there are often pricing differences between the front office and risk systems. Validators often use one system to benchmark the other and hence use them as leverage in the validation exercise. Even though the prices may not match, the sensitivities should match.

Need for FRTB

I attended an FRTB conference last Thursday and Friday.

There, a speaker gave a very nice and interesting overview of why FRTB exists in the first place. Listening to him was a treat for the ears!

I was able to capture some notes from his presentation.

During the pre-crisis era

  1. Rules on the boundary between the trading book and the banking book were imprecise, and regulators could not stop the regulatory arbitrage happening in the banking sector through the transfer of assets between the banking book and the trading book and vice versa.
  2. There was no desk-wise visibility for regulators. Regulators had the right to look into details below the enterprise level, but because there were no set procedures around it, they seldom looked into the details.
    1. For this reason regulators were not able to question the desks that were carrying out complex trades under seniority tranches.
  3. There was no regulatory tool which they could use to stop the usage of a model. Regulators could question model usage but had to deal with each model individually. In the process many risks remained uncovered.
  4. There was no linkage between the standardized approach and the model-based approach, and no logical or intuitive relationship between the capital calculated under the two approaches.
  5. The degree of complexity was not penalized in the way regulators deemed fit; rating migration/default risk was not captured.
  6. With the VaR approach, regulators and risk managers were looking at the point where the tail begins rather than at what lies within the tail.
  7. The 2008 crisis proved that the way liquidity was measured was inadequate.

After the crisis

Regulators came up with Basel 2.5/3, which was a quick patch to address some of the above challenges:

  1. Rating migration risk was captured by including CVA in capital calculations.
  2. Liquidity was captured by introducing the multiplication factor 3.
  3. Tail risk was captured by calculating the VaR in stressed market conditions.
  4. A complex calculation was introduced to capture incremental risk charges.
  5. Comprehensive risk measures were applied to correlation trading books.

The FRTB rules were decided over a course of almost five years. They address all the above issues and also include:

  1. The need to address data quality issues, through the introduction of non-modellable risk factors.
  2. Comparability of the Standardized Approach and the Internal Model Approach, achieved by making the Standardized Approach risk sensitive.
  3. Mandatory PnL attribution, and hence triangulation between front office data and risk management data.

 

I will update this page as I learn and understand FRTB more.

Model Risk Management under FRTB regime

Under previous regulatory guidelines the standardized approach was usually not meant to be a model, and there was no linkage between the standardized approach and the internal model-based approach.

Regulators were able to compare capital under the standardized approach between two banks because the rules were the same for all banks, but for risk managers there was no logical or intuitive relationship between the capital calculated under the standardized and the internal model-based approaches for a given portfolio.

Making the standardized approach risk sensitive is a path-breaking change made by the regulators.

In FRTB the standardized approach's risk sensitivities come from the derivative pricing models, and these pricing models have uncertain outcomes. Capital has to be calculated at the desk level, so each desk's model has to go through the normal model risk management framework, where it is checked for conceptual soundness, outcomes analysis, and ongoing monitoring.

Currently (Basel 3 and before), when capital is allocated for specific risk charges, risk managers generally go to lookup tables and apply some formulas. This is no longer valid under FRTB. FRTB has introduced the concept of non-modellable risk factors: any risk factor which has fewer than 24 data points will attract a Non-Modellable Risk Charge (NMRC).

FRTB also acknowledges that in various situations (especially exotic option pricing) the prescribed delta, gamma, and vega approach does not work well. Modelers, and subsequently validators, need to be cognizant of this issue and ensure that the model development framework addresses it in those situations.

Under FRTB, portfolio-level capital has to be calculated under three correlation scenarios, and at the desk level the scenario with the maximum capital value is to be chosen. The calculation approach used to derive the capital will come under the purview of model risk.

The default risk charge uses various inputs which are not always straightforward and deterministic. The choices made by the first line of defense will require validation by the second line of defense.

In the internal model-based approach, Expected Shortfall is calculated. The model is stress calibrated, that is, calibrated to stressed scenarios. When this calculation is performed, not all risk factors are used for calibration; the first line of defense has to decide which factors they are using and which they are not, and then justify their choice. The second line of defense has to review and critique those choices.

The same approach has to be used for the add-ons due to non-modellable risk factors: based on the exceptions, the multipliers have to be decided. Similarly, for PnL attribution the first line of defense needs to decide the approach, and that approach will be reviewed by the second line of defense.

Huge opportunities in Ongoing Monitoring

By insisting on capital allocation at the desk level, regulators want banks to keep a detailed tab on each and every trading activity. Even if a desk qualifies for the internal model-based approach, it is mandatory to also calculate capital with the standardized approach.

This will result in a massive amount of data being generated which will need to be monitored. Banks will require sophisticated model monitoring frameworks which keep an eye on outliers and perform pattern analysis of each data sequence generated, so that timely warning signals are raised.

To handle this amount of data, banks will require a sophisticated technological framework which uses advanced statistical tools to perform pattern analysis of the risk data generated.

The triggers will not only be pattern dependent but will also depend on combinations of risk measures. Let's understand this with an example. Suppose a time sequence of (gamma OR vega) shows some outliers: the monitoring system should raise a trigger, but it may not be critical. But suppose the same pattern is observed in the (gamma AND vega) sequence; then it may really be a case where risk managers should be cautioned. This will give rise to research in a new area of financial risk management. A toy version of such a trigger is sketched below.
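
A toy version of such a combined trigger: each Greek series is screened with a rolling z-score, and the alert is escalated only when gamma and vega are both out of line on the same day. The window length and threshold are illustrative assumptions, not a prescription.

```python
import pandas as pd

def joint_outlier_alert(greeks: pd.DataFrame, window: int = 60, z: float = 3.0) -> pd.DataFrame:
    """greeks: DataFrame indexed by date with 'gamma' and 'vega' columns of daily desk-level Greeks."""
    rolling = greeks.rolling(window)
    zscores = (greeks - rolling.mean()) / rolling.std()
    outlier = zscores.abs() > z
    warn = outlier["gamma"] | outlier["vega"]       # informational trigger
    critical = outlier["gamma"] & outlier["vega"]   # escalate to risk managers
    return pd.DataFrame({"warn": warn, "critical": critical})
```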

Banks may choose to develop such frameworks in-house or may choose to employ vendor models.

From the vendor perspective there will be huge competition, with each vendor claiming that its model monitoring framework is better because of the better and more advanced techniques used in it. Hence there will be further issues around vendors' disclosure of methodologies and techniques: no vendor will want to disclose the methodology of its model monitoring framework for fear of being copied by a competitor.

Concluding thoughts

Looking at the market risk management framework, it appears that regulators have made a giant leap in understanding and managing risk, but because of the data intensiveness of the suggested approach, a new race in ongoing monitoring will begin.

 

There is also a publication: Model Risk Management under the FRTB Regime

Link: http://www.garp.org/#!/risk-intelligence/detail/a1Z40000003LViKEAW/model-risk-management-under-frtb-regime

 

I will update this page as I learn and understand FRTB more.