What senior investment leaders expect from their risk analysts

Senior investment managers and leaders have historically had a very good understanding of the market. They are the ones who understand the data intuitively.

Over time the world has become more integrated and businesses have diversified, and the data describing them has diversified with them. Even though markets are getting more complicated than ever, senior investment professionals are still the very best at understanding them. They still understand their data. The challenge they face is deriving knowledge from it: any individual market data point is easy for them to interpret, but when there are many data points, extracting knowledge becomes overwhelming even for a seasoned professional.

From my experience on various projects with such professionals, I have learned that when they come to risk analysts with data and a problem, they are generally not looking for a sophisticated statistical model. They are looking for a user-interface tool kit that simplifies their data mining process and lets them extract knowledge from the data easily.

I would like to share some of my personal project experience.

Toolbox for a hedge fund manager

A hedge fund manager had a portfolio of revolver loans that he wanted to sell to a client. To make the sale, he had to show the client the worst possible loss the portfolio could produce.

The data he had was the last 10 years of history for each loan customer and the outstanding loan each customer had. Customers who had drawn down more than 80% of their allocated amount were considered risky. He came to our team with a request to develop a tool that could mine knowledge from his data by performing historical simulation on random sub-samples. We offered to perform additional statistical analysis, but he was not interested.

So we developed a tool that could run simulations on sample sizes of his choice. He wanted to demonstrate to his client, across all the customers:

  1. In a given year, what percentage of the customers were in risky territory?
  2. Which sectors were the riskiest?
  3. How much dollar value of each segment's portfolio was at risk?
  4. What is the average outstanding-to-loan ratio for a random sample?
  5. What is the standard deviation of the outstanding-to-loan ratio for the chosen sample?

Our tool gave him the freedom to choose multiple samples at a time and his choice of time period. All the visualizations were in graphical and simple tabular format. Empowered by his own data, he was able to make the sale!
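For readers curious about the mechanics, below is a minimal sketch of the kind of random sub-sample simulation the tool performed. The table layout (columns customer, year, outstanding, limit) is a hypothetical assumption for the example, not the client's actual data format; the 80% threshold is the one described above.

```python
import numpy as np
import pandas as pd

def subsample_stats(loans: pd.DataFrame, sample_size: int, year: int,
                    n_draws: int = 1000, seed: int = 0) -> pd.DataFrame:
    """Historical simulation on random customer sub-samples for one year.

    `loans` is assumed to have columns: customer, year, outstanding, limit
    (hypothetical names). A customer above 80% utilisation is flagged risky.
    """
    rng = np.random.default_rng(seed)
    snapshot = loans[loans["year"] == year]
    rows = []
    for _ in range(n_draws):
        sample = snapshot.sample(n=sample_size,
                                 random_state=int(rng.integers(1 << 31)))
        utilisation = sample["outstanding"] / sample["limit"]
        risky = utilisation > 0.8
        rows.append({
            "pct_risky": risky.mean() * 100,                            # question 1
            "dollars_at_risk": sample.loc[risky, "outstanding"].sum(),  # question 3
            "avg_outstanding_to_loan": utilisation.mean(),              # question 4
            "std_outstanding_to_loan": utilisation.std(),               # question 5
        })
    return pd.DataFrame(rows)

# Example: summarise 1,000 random samples of 50 customers for one year.
# summary = subsample_stats(loans, sample_size=50, year=2009).describe()
```

The real tool wrapped a loop like this behind a simple interface for picking the year, sample size and number of samples, and presented the output as charts and tables.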

Expected Loss calculator for a Loan Portfolio Manager

A senior loan portfolio manager managed a multi-billion dollar loan portfolio with exposure in various segments.

He had identified a set of risk factors which impacted the exposure of his portfolio, and he requested us to develop a toolbox with which he could measure the change in exposure.

We developed an Excel-based tool where he could vary the risk factors and check the change in portfolio valuation sector by sector.
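The actual deliverable was an Excel workbook, but the calculation idea is easy to illustrate. The sketch below, with assumed column names and simple additive shocks, shows a sector-wise expected loss comparison between a base case and a shocked case.

```python
import pandas as pd

def expected_loss_by_sector(portfolio: pd.DataFrame,
                            pd_shock: float = 0.0,
                            lgd_shock: float = 0.0) -> pd.Series:
    """Expected loss per sector under shocked risk factors.

    `portfolio` is assumed to have columns: sector, ead, pd, lgd
    (hypothetical names). Shocks are additive and clipped to [0, 1].
    """
    shocked_pd = (portfolio["pd"] + pd_shock).clip(0, 1)
    shocked_lgd = (portfolio["lgd"] + lgd_shock).clip(0, 1)
    el = portfolio["ead"] * shocked_pd * shocked_lgd
    return el.groupby(portfolio["sector"]).sum()

# Example: impact of a 50bp PD shock, sector by sector.
# base = expected_loss_by_sector(portfolio)
# stressed = expected_loss_by_sector(portfolio, pd_shock=0.005)
# change_in_el = stressed - base
```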

PCA for an Interest Rate Strategist

Principal Component Analysis (PCA) is perhaps the most effective tool for dealing with highly correlated data, and interest rate curve data is one of the best examples of such data. There is high auto-correlation within each historical series as well as very high collinearity across the tenors. Hence PCA is a favorite statistical tool of interest rate strategists.

Many years ago, even though Excel was the favorite tool of business professionals, Excel-based tools were not easily found on the internet. An interest rate strategist came to us with a request to develop an Excel-based tool which calculated PCA for the historical curve series of any currency of his choice.

We offered to include additional sophisticated statistical models to help him in his analysis, but he was only interested in PCA. After further discussion, we learned that he relied on his own judgment about the data and simply required a tool which could run the PCA and, in addition, give a very good visualization of the results.
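The core of such a PCA tool is small; the visualization layer is where most of the work went. The sketch below assumes a hypothetical (days x tenors) matrix of historical rates for one currency.

```python
import numpy as np

def curve_pca(curve_history: np.ndarray):
    """PCA of daily yield curve changes.

    `curve_history` is assumed to be a (days x tenors) array of rates.
    Returns explained-variance ratios and loadings (tenors x components).
    """
    changes = np.diff(curve_history, axis=0)           # daily changes per tenor
    centered = changes - changes.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigenvalues, eigenvectors = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigenvalues)[::-1]              # largest first
    explained = eigenvalues[order] / eigenvalues.sum()
    return explained, eigenvectors[:, order]

# The first three components typically correspond to level, slope and
# curvature and usually explain the bulk of the variance.
```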

Conclusion

Investment managers and leaders often reach out to risk analysts with business problems associated with data mining. These people are in the market and have developed deep expertise in understanding it. The problems they bring are generally not in the regulatory domain but are about data mining.

When the business problem is not in the regulatory domain, analysts should not push additional statistical analysis beyond what was requested; they should instead focus on empowering these professionals with better visualization of the requested analysis, and on developing tools that give them more freedom to modify the analysis and view it from various perspectives.


P&L attribution challenge in FRTB compliance

FRTB’s P&L attribution test requirements are based on two metrics:

  1. The mean of the unexplained daily P&L (i.e. the risk-theoretical P&L minus the hypothetical P&L) over the standard deviation of the hypothetical daily P&L:

Ratio1 = mean(P&LRisk − P&LHypo) / σ(P&LHypo)

The mean is taken over the N trading days in the month. Ratio1 has to be between -10% and +10%.

  2. The ratio of the variances of the unexplained daily P&L and the hypothetical daily P&L:

Ratio2 = variance(P&LRisk − P&LHypo) / variance(P&LHypo)

Ratio2 has to be less than 20%.

These ratios are calculated monthly and reported before the end of the following month. If the first ratio falls outside the range of -10% to +10%, or if the second ratio exceeds 20%, the desk experiences a breach. If the desk experiences four or more breaches within the prior 12 months, it must be capitalized under the standardized approach. The desk must remain on the standardized approach until it passes the monthly P&L attribution requirement and has satisfied its backtesting exception requirements over the prior 12 months.
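A minimal sketch of the monthly test, assuming the two daily P&L series for a desk are already available:

```python
import numpy as np

def pl_attribution_test(pl_risk: np.ndarray, pl_hypo: np.ndarray):
    """FRTB P&L attribution ratios for one month of daily P&L.

    pl_risk / pl_hypo: risk-theoretical and hypothetical daily P&L series.
    Returns (ratio1, ratio2, breach) for the month.
    """
    unexplained = pl_risk - pl_hypo
    ratio1 = unexplained.mean() / pl_hypo.std(ddof=1)
    ratio2 = unexplained.var(ddof=1) / pl_hypo.var(ddof=1)
    breach = not (-0.10 <= ratio1 <= 0.10) or ratio2 >= 0.20
    return ratio1, ratio2, breach

# Four or more monthly breaches in the trailing 12 months push the desk
# onto the standardized approach.
```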

The real challenge:

Let's look at variance(P&LRisk − P&LHypo).

Volatility of P&LRisk  = σ(P&LRisk)

Volatility of P&LHypo  = σ(P&LHypo)

Then variance(P&LRisk − P&LHypo) = σ²(P&LRisk) + σ²(P&LHypo) − 2×ρ×σ(P&LRisk)×σ(P&LHypo), where ρ is the correlation between the two P&L series.

For simplicity let us assume that σ(P&LRisk) = σ(P&LHypo). Then

Ratio2 = variance(P&LRisk − P&LHypo) / variance(P&LHypo) = (2 − 2×ρ) × σ²(P&LHypo) / σ²(P&LHypo) = 2 − 2×ρ

As per the FRTB guidelines, (2 − 2×ρ) < 20%

This implies ρ > 90%, i.e. the risk-theoretical and hypothetical P&L must be very highly correlated for the desk to pass.

The following article explains these challenges from a practical perspective:

FRTB Compliance – Implementation Challenges http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

 

Risk management systems and front office systems

Validators often have to validate derivative pricing models. Banks generally have two systems: one for risk management and one for the front office. Ideally both systems should be exactly the same, but in practice this is not common. Because of the direct impact they have on trading, front office systems are more sophisticated, which also makes them expensive. For that reason banks often use cheaper or home-grown systems for their risk management requirements.

The front office engine's pricing calculations tend to be more accurate because they directly impact the business's PnL. Pricing models in risk systems are simplified because in risk calculation exercises such as VaR/PFE a derivative has to be priced many times, so there are time constraints.

Differences between derivative pricing models in front office and risk management systems are observed from both an input data perspective and a model perspective.

Input data perspective:

  1. Front office systems use sophisticated techniques for yield curve interpolation, whereas risk management systems get away with simple linear interpolation.
  2. Option-based products use volatility smiles (simple equity options), volatility surfaces (FX options, caps/floors) and volatility cubes (swaptions).
    1. Front office data generally has more points in these smiles, surfaces or cubes than risk management systems do.
    2. Interpolation follows the same pattern as discussed above.
  3. Risk models require historical data to calculate VaR/PFE. Often, for illiquid currencies as well as for exotic derivatives, historical data is not adequate; in those cases alternate data is used. For example, if the volatility of a particular currency is not available, the volatility of a similar currency is used as a proxy.

The challenge multiplies when there are multiple currencies in the portfolio. The data triangulation issue grows because in risk management the same simplifications are often applied to every currency, even though they may not be valid for each one. Risk managers often prioritize their data based on their portfolio concentration.
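To see how much the interpolation choice alone (point 1 above) can matter, here is a small sketch comparing a linear and a cubic spline build of a zero curve; the tenors and rates are made-up numbers.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical zero curve: tenors in years and continuously compounded zero rates.
tenors = np.array([0.25, 0.5, 1, 2, 5, 10, 30])
zeros = np.array([0.021, 0.022, 0.024, 0.027, 0.031, 0.034, 0.036])

target = 7.0  # a tenor not quoted on the curve

rate_linear = np.interp(target, tenors, zeros)            # typical risk-system shortcut
rate_spline = float(CubicSpline(tenors, zeros)(target))   # closer to a front office build

df_linear = np.exp(-rate_linear * target)
df_spline = np.exp(-rate_spline * target)
print(f"7y discount factor: linear={df_linear:.6f}, spline={df_spline:.6f}")
```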

Model perspective:

For exotic derivatives, front office pricing models use sophisticated techniques such as Monte Carlo simulation, whereas risk models use approximate techniques. These are often analytical approximations of pricers for which a closed-form solution is not tractable. For example, to price an American put option the front office uses Monte Carlo (or other numerical) methods, but it is generally observed that risk models use approximate analytical formulas.
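As a toy illustration of this gap, the sketch below prices an American put two ways: a binomial tree standing in for the front office numerical pricer, and the plain Black-Scholes European put standing in for the kind of analytical shortcut a risk system might take. Real systems are far more elaborate; the point is only that the two prices differ.

```python
import numpy as np
from scipy.stats import norm

def american_put_crr(s, k, r, sigma, t, steps=500):
    """Cox-Ross-Rubinstein binomial American put (numerical, front-office style)."""
    dt = t / steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1 / u
    p = (np.exp(r * dt) - d) / (u - d)
    disc = np.exp(-r * dt)
    prices = s * u ** np.arange(steps, -1, -1) * d ** np.arange(0, steps + 1)
    values = np.maximum(k - prices, 0.0)
    for _ in range(steps):
        prices = prices[:-1] / u                      # step one level back on the tree
        values = disc * (p * values[:-1] + (1 - p) * values[1:])
        values = np.maximum(values, k - prices)       # early exercise check
    return values[0]

def european_put_bs(s, k, r, sigma, t):
    """Black-Scholes European put: a typical analytical shortcut in a risk system."""
    d1 = (np.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * np.sqrt(t))
    d2 = d1 - sigma * np.sqrt(t)
    return k * np.exp(-r * t) * norm.cdf(-d2) - s * norm.cdf(-d1)

print(american_put_crr(100, 100, 0.05, 0.2, 1.0))   # roughly 6.1
print(european_put_bs(100, 100, 0.05, 0.2, 1.0))    # roughly 5.6
```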

For such reasons there are often differences between front office and risk system prices. Validators often use one system to benchmark the other and hence use them as leverage in the validation exercise. Even if the prices do not match exactly, the sensitivities should match.

Need for FRTB

I attended an FRTB conference last Thursday and Friday.

There, a speaker gave a very nice and interesting overview of why FRTB exists in the first place. Listening to him was a treat!

I was able to capture some notes from his presentation.

During the pre-crisis era

  1. The rules defining the boundary between the trading book and the banking book were imprecise, and regulators could not stop the regulatory arbitrage happening in the banking sector through transfers of assets between the banking and trading books.
  2. Regulators had no desk-wise visibility. They had the right to look into details below the enterprise level, but because there were no set procedures around it, they seldom did.
    1. For this reason regulators were not able to question the desks which were carrying out complex trades in seniority tranches.
  3. There was no regulatory tool they could use to stop the use of a model. Regulators could question model usage but had to deal with each model individually, and in the process many risks remained uncovered.
  4. There was no linkage between the standardized approach and the model-based approach, and no logical or intuitive relationship between the capital figures calculated under the two.
  5. The degree of complexity was not penalized in a way regulators deemed fit, and rating migration/default risk was not captured.
  6. With the VaR approach, regulators and risk managers were looking at the point where the tail began rather than at what was within the tail.
  7. The 2008 crisis proved that the way liquidity was measured was inadequate.

After the crisis

Regulators came up with Basel 2.5/3, a quick patch to address some of the above challenges:

  1. Rating migration risk was captured by including CVA in capital calculations.
  2. Liquidity was captured by introducing the multiplication factor 3.
  3. Tail risk was captured by calculating the VaR in stressed market conditions.
  4. A complex calculation was introduced to capture incremental risk charges.
  5. Comprehensive risk measures were applied to correlation trading books.

The FRTB rules were developed over a course of almost 5 years. They address all the above issues and also include:

  1. The need to address data quality issues, via the concept of non-modellable risk factors.
  2. Comparability of the Standardized Approach and the Internal Model Approach, by making the Standardized Approach risk sensitive.
  3. Mandatory PnL attribution, and hence triangulation between front office data and risk management data.

 

I will update this page as I learn and understand FRTB more.

Model Risk Management under FRTB regime

Under previous regulatory guidelines the standardized approach was not meant to be a model, and there was no linkage between the standardized approach and the internal model-based approach.

Regulators could compare the capital of two banks under the standardized approach because the rules were the same for all banks, but for risk managers there was no logical or intuitive relationship between the capital calculated under the standardized and the internal model-based approaches for a given portfolio.

Making the Standardized Approach risk sensitive is a path-breaking change made by the regulators.

In FRTB, the standardized approach's risk sensitivities come from the derivative pricing models, and these pricing models have uncertain outcomes. Capital has to be calculated at the desk level, so each desk's models have to go through the normal model risk management framework, where they are checked for conceptual soundness, outcomes analysis and ongoing monitoring.

Currently (Basel 3 and before), when capital is allocated for specific risk charges, risk managers generally use lookup tables and apply some formulas. This is no longer valid in FRTB. FRTB has introduced the concept of non-modellable risk factors: any risk factor with fewer than 24 data points (real price observations) will attract a Non-Modellable Risk Charge (NMRC).
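As a rough sketch of how such a check might be automated, assuming a simple table of real price observations per risk factor (the full FRTB eligibility test also constrains the gaps between observations; this sketch only applies the observation-count criterion mentioned above):

```python
import pandas as pd

def flag_non_modellable(observations: pd.DataFrame, min_obs: int = 24) -> pd.Series:
    """Flag risk factors with fewer than `min_obs` real price observations
    over the trailing year. `observations` is assumed to have columns
    risk_factor and obs_date (hypothetical names).
    """
    cutoff = observations["obs_date"].max() - pd.DateOffset(years=1)
    recent = observations[observations["obs_date"] > cutoff]
    counts = recent.groupby("risk_factor")["obs_date"].nunique()
    all_factors = observations["risk_factor"].unique()
    return counts.reindex(all_factors, fill_value=0) < min_obs

# nmrf = flag_non_modellable(real_price_obs)
# print(nmrf[nmrf].index.tolist())   # factors attracting the NMRC add-on
```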

FRTB also acknowledges that in various situations (especially exotic option pricing) the prescribed delta, gamma and vega approach does not work well. Modelers, and subsequently validators, need to be cognizant of this and ensure that the model development framework addresses those situations.

Under FRTB, portfolio-level capital has to be computed under three correlation scenarios, and at the desk level the scenario with the maximum capital value is to be chosen. The calculation approach used to derive capital comes under the purview of model risk.

The default risk charge uses various inputs which are not always straightforward and deterministic. The choices made by the first line of defense will require validation by the second line of defense.

In the internal model-based approach, Expected Shortfall is calculated. The model is stress calibrated, that is, calibrated to stressed scenarios. When this calculation is performed, not all risk factors are used for calibration: the first line of defense has to decide which factors to include and justify that choice, and the second line of defense has to review and critique it.
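For reference, the Expected Shortfall calculation itself is simple once the P&L vector exists; the hard part is the choice of risk factors and of the stressed period discussed above. A minimal sketch:

```python
import numpy as np

def expected_shortfall(pnl: np.ndarray, alpha: float = 0.975) -> float:
    """Expected Shortfall of a daily P&L vector: the average loss on the
    worst (1 - alpha) fraction of days. FRTB prescribes a 97.5% level;
    under stressed calibration the P&L vector would be generated from the
    reduced risk factor set over the stressed period.
    """
    losses = np.sort(-pnl)[::-1]                        # losses, worst first
    n_tail = max(1, int(np.ceil((1 - alpha) * len(pnl))))
    return float(losses[:n_tail].mean())

# es = expected_shortfall(stressed_pnl)   # stressed_pnl is a hypothetical series
```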

The same division of responsibilities between the first and second lines of defense applies to the add-ons due to non-modellable risk factors: the multipliers have to be decided based on the exceptions. Similarly, for PnL attribution the first line of defense needs to decide the approach, and the second line of defense will review it.

Huge opportunities in Ongoing Monitoring

By insisting on capital allocation at desk level, regulators want banks to keep detailed tabs on each and every trading activity. Even if a desk qualifies for the Internal Model-Based Approach, it is mandatory to calculate capital under the standardized approach as well.

This will result in a massive amount of data being generated that will need to be monitored. Banks will require sophisticated model monitoring frameworks that keep an eye on outliers and do pattern analysis of each data sequence generated, so that timely warning signals are raised.

To handle this amount of data, banks will require a sophisticated technological framework that uses advanced statistical tools to perform pattern analysis of the risk data generated.

The triggers would depend not only on patterns but also on combinations of risk measures. Let's understand this with an example. Suppose the time sequence of gamma OR vega shows some outliers: the monitoring system should raise a trigger, but it may not be critical. But if outliers are observed in the gamma AND vega sequences together, then it may really be a case where risk managers should be cautioned. This will give rise to research in a new field of financial risk management.
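A toy version of such a combined trigger might look like the sketch below; the three-sigma outlier rule is just an illustrative choice.

```python
import numpy as np

def combined_outlier_trigger(gamma: np.ndarray, vega: np.ndarray, z: float = 3.0):
    """Flag days where gamma or vega deviates more than `z` standard
    deviations from its mean (a soft warning), and escalate when both
    deviate on the same day (the critical case described above).
    """
    def outliers(series: np.ndarray) -> np.ndarray:
        return np.abs(series - series.mean()) > z * series.std(ddof=1)

    gamma_out, vega_out = outliers(gamma), outliers(vega)
    return {
        "warning_days": np.where(gamma_out | vega_out)[0],   # gamma OR vega
        "critical_days": np.where(gamma_out & vega_out)[0],  # gamma AND vega
    }
```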

Banks may choose to develop such frameworks in-house or may choose to employ vendor models.

From the vendor perspective, there will be huge competition, with each vendor claiming that its model monitoring framework is better because of the more advanced techniques used in it. This raises further issues around disclosure of methodologies and techniques: no vendor will want to disclose the methodology of its monitoring framework for fear of being copied by a competitor.

Concluding thoughts

Looking at the market risk management framework, it appears that regulators have made a giant leap in understanding and managing risk, but because of the data intensiveness of the suggested approach, a new race around ongoing monitoring will start.

 

There is also a publication: Model Risk Management under the FRTB Regime

Link: http://www.garp.org/#!/risk-intelligence/detail/a1Z40000003LViKEAW/model-risk-management-under-frtb-regime

 

I will update this page as I learn and understand FRTB more.

What concerns banks most on FRTB implementation

I have been looking into FRTB for the last couple of months, and after reading various points of view, talking to experts and listening to their views, I feel that the key concerns banks have about FRTB implementation are:

  1. The most important concern: data and its alignment.
  2. The need for national regulators' clarity on local-level implementation.
  3. Overall cost of implementation (this requires understanding the bank's internal systems).
  4. The analytics of FRTB.
  5. Viability of the business.
  6. Estimates of ongoing business-as-usual costs.

How reasonable are these concerns?

Data and its alignment

Because front office and risk management data are not aligned, PnL attribution will be a challenge, especially when front office PnL includes intraday trades while risk PnL is built from portfolio snapshots. The pricing methodologies and the choice of data for marking derivatives to market also need to be aligned.

For many banks FRTB implementation will require significant changes to the current market and risk infrastructure. This may include:

  1. Comparing risk factors captured in risk management tools and pricing models.
  2. Comparing the bank's current methodologies for measuring risk factor sensitivities with the prescribed methodologies under the sensitivities-based approach, and then undertaking the remediation efforts needed to support the standardized calculations.
  3. Identifying sources of gaps in transaction data that may impact the real price criteria, and determining the infrastructure and processes required to remediate such gaps.
  4. Analyzing the sources of reference data needed for the standardized approach.
  5. Assessing the hardware and calculation efficiency needed to meet the increased computation and data storage load.
  6. Coordinating FRTB programs with other programs such as BCBS 239.

Another big challenge is that whenever new risk systems are implemented, or sometimes even upgraded, departments cannot let go of the existing systems, because many legacy dependencies of people and processes would be disrupted.

So the biggest challenge executives have is fragmented systems. This problem spills into data, models, their monitoring and practically every process related to risk management.

This problem of fragmented data, systems and processes is multiplied because, in addition to organic growth, banks grow inorganically through continuous acquisitions and mergers. The acquired businesses may be from a different domain than the bank's original one, and if the acquisition is in a different geographic location the systems may face not only the challenge of a different regulatory territory but challenges as basic as a difference in language.

Regular churning of the portfolio based on “at the moment” business and economic scenarios also compounds this challenge.

There is an article which I would like the reader to check: FRTB Compliance – Implementation Challenges http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

Non-modellable risk factors also pose serious challenges in FRTB compliance. For pricing a security, the market data currently available may be sufficient, but FRTB compliance demands that risk systems use the same risk factors for pricing as the front office, and for risk calculation purposes that data may not be available historically.

National regulator’s clarity on the local level implementation

This is of course a serious concern, because clarity is needed to start planning the implementation of the framework. But historically national regulators seldom deviate materially from the basic idea, so this concern, though serious, is not critical.

Though not critical, there will be a lot of confusion for banks which have a global presence and thus operate in multiple jurisdictions, because every jurisdiction will have its own requirements for FRTB implementation covering a bank's local and international assets. Some of them can be:

  1. Definition of exotics and complex instruments for residual add-ons by various jurisdictions.
  2. FRTB has enhanced supervisory powers over the approval of transferring instruments between banking and trading books. For banks working in multiple jurisdictions, variation in supervisors' opinions will pose implementation challenges.

Overall cost of implementation

FRTB is about making risk systems transparent and comparable. Banks that have followed a disciplined approach to embedding this as a culture will not incur huge costs.

The new trading book/banking book boundary will lead to increased operational costs, requiring multi-faceted approaches and technology infrastructure changes.

Analytics of the FRTB

This is not the most challenging part of FRTB. Banks have already developed analytical frameworks for back-testing their VaR models, and meeting the FRTB guidelines should not pose any significant analytics challenges.

Through PnL attribution you are supposed to capture the unexplained PnL between the risk system's PnL and the front office PnL.

As discussed under data alignment above, the real difficulty here is the non-alignment of front office and risk management data rather than the analytics itself.

Viability of business

Failure of internal models in PnL attribution will force banks to fall back on the Standardized Approach, which will raise capital requirements. This may make trading desks/businesses less attractive in terms of profit, and it is possible that some desks/businesses might be shut down.

Estimates on the ongoing business as usual costs

The word “report” is mentioned 69 times in the final document released on January 16th. Clearly FRTB has made all the risk processes more sensitive, and those sensitivities will need to be recorded, monitored and periodically analyzed. There will be significant costs in establishing model/process monitoring frameworks. This culture will give rise to discipline in the risk processes, and hence the returns will be seen in the long term.

 

I will update this page as I learn and understand FRTB more.

Digitization of Financial Risk Model Life Cycle

The future of robust financial risk management lies in an integrated risk management system where the various risk-related processes are digitized, so that there are minimal manual touch points and the processes are formalized and uniform across the financial institution.

Various technology vendors have upped the ante to provide technology platforms for the risk management activities of their financial clients. Often a disconnect exists between the expectations of risk managers and the services the technology vendors claim to provide. In this article we try to fill that gap by stating the expectations from a financial risk manager's perspective.

The first step in any risk management process is business understanding. Risk managers should have a clear understanding of the portfolio whose risk they plan to quantify, in the context of the company's business objectives.

Any digitized framework must be able to help risk managers record their understanding of the business easily. The framework should enable risk managers to document discussions with business managers and to map both the business processes feeding the model they are developing and the processes impacted by the model's output.

The second step is understanding the data associated with the business process and its subsequent cleansing. The digitization framework should support the risk manager in several ways:

  1. It should enforce basic properties of the data, for example that a probability of default cannot exceed 1 and that stock prices must be positive.
  2. It should help the risk manager triangulate the data from its origin. For example, a given loan portfolio has three different models predicting Probability of Default (PD), Loss Given Default (LGD) and Exposure at Default (EAD), developed by separate teams; these teams should be able to ensure that the data they are using is consistent. Interestingly, it often happens that a 100B portfolio is covered by 6 different PD models (say 10B, 20B, 30B, 10B, 20B and 20B), 4 LGD models (25B each) and one EAD model catering to a 250B portfolio of which this 100B is a part.
  3. The framework should also ensure that the data cleansing experience is recorded, so that it can eventually be automated as the process becomes standardized.
  4. When data is limited, the risk system should be able to point the risk manager to similar portfolios within the organization whose data may be used to gain a deeper understanding of the business risk problem.

A disconnect often exists among risk modeling teams: a team developing the PD model for a portfolio may not be aware of the LGD model developed for the same portfolio. As discussed in point 2, there may not be a dedicated PD, LGD or EAD model for a given portfolio, which makes the problem even more complex. The risk system should be able to tell, at the data set level, where and how any piece of data is being used.
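As an example of the data property checks mentioned in point 1, a digitized framework could run rules like the following sketch; the column names are assumptions.

```python
import pandas as pd

def validate_portfolio_data(df: pd.DataFrame) -> dict:
    """Count violations of basic data properties.

    `df` is assumed to have columns pd, lgd, ead and price (hypothetical names).
    """
    checks = {
        "pd_outside_[0,1]": ~df["pd"].between(0, 1),
        "lgd_outside_[0,1]": ~df["lgd"].between(0, 1),
        "negative_ead": df["ead"] < 0,
        "non_positive_price": df["price"] <= 0,
        "missing_values": df[["pd", "lgd", "ead", "price"]].isna().any(axis=1),
    }
    return {rule: int(mask.sum()) for rule, mask in checks.items()}

# violations = validate_portfolio_data(portfolio_data)
```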

The third step of MRM is the development, testing and implementation of the model. The choice of methodology is unique to the underlying data and its properties. For simple regression-based models the digital framework should provide all the statistical measures relevant to ensuring that the regression is sound, and it should be able to warn the risk manager when there is a possibility of spurious regression. Two practical examples we would like to share:

  1. When data is limited, regression relationships are held hostage by a few outlier observations; their inclusion or exclusion makes or breaks the relationship.
  2. In the absence of stationarity, statistical measures may falsely suggest that the regression relationship is valid.

There should be a provision to record such experience studies so that future model developers and validators can use them to ensure robustness in the risk models.
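The second example is easy to demonstrate: regressing one random walk on another, completely independent, random walk often produces a deceptively high R² precisely because neither series is stationary. The sketch below shows the effect and the kind of stationarity warning (an augmented Dickey-Fuller test) a framework could surface automatically.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(42)
n = 500

# Two independent random walks: there is no true relationship between them.
x = np.cumsum(rng.normal(size=n))
y = np.cumsum(rng.normal(size=n))

ols = sm.OLS(y, sm.add_constant(x)).fit()
print(f"R^2 of the levels regression: {ols.rsquared:.2f}")   # often deceptively high

# A high ADF p-value means a unit root cannot be rejected, i.e. the series
# is non-stationary and the levels regression is suspect.
print(f"ADF p-value (y levels):      {adfuller(y)[1]:.3f}")
print(f"ADF p-value (y differences): {adfuller(np.diff(y))[1]:.3f}")
```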

When models are just implementations of standard methodologies, they are easily built into the risk systems. But it is also common for a model methodology to represent a business process, and it is a challenge to make such models completely system-based.

Such models require advanced tools like R, Matlab, SAS or C++. The digitized framework should be robust enough to ensure that:

  1. Data inputs/outputs are easily imported/exported in/out of these programming languages.
  2. It supports automated running of code written in various programming languages.

It is also the responsibility of the developers, in case they code the model outside the framework, to make it robust enough that it can be run in an automated fashion with minimal human intervention. They should follow coding best practices such as no hard-coding of numbers and coding the model in any 'one' language of their choice.

The fourth step is model documentation. This is the most important aspect of risk management and probably the biggest value-add any digitized framework can provide. The digitized framework should provide:

  1. The possibility to develop the documentation simultaneously with the work, from business understanding through to the final step of model monitoring.
  2. The documentation should thus represent the status of the risk management exercise for any senior manager or leader.

The next step in MRM is model validation. There are two schools of thought in the financial risk management industry around model validation:

  1. The model validation exercise starts after model development is complete.
    • Benefits: Ensures that model validation is independent of model development.
    • Limitations: It is a sequential process and hence time-consuming, and even more so when the model is failed by the validators or later by auditors.
  2. The model validation exercise runs in parallel with model development.
    • Benefits: Model validators are part of the discussions from the very beginning and give independent opinions on the modeling process in an agile way, so the whole risk measurement exercise is completed in a shorter time.
    • Limitations: A higher possibility that the independence of validation from development is compromised.

The digitization engine should ensure that in both scenarios validators are able to independently critique the business process, test the model for conceptual validity, and perform their own back-testing and sensitivity analysis.

Once the model is validated and implemented, for ongoing monitoring of the model the digitized framework should be able to:

  • Keep track of model outputs, back-test and sensitivity results.
  • Let risk managers compare the results with similar models in the company and hence draw inferences about the business and any model risk.
  • Let risk managers use the historical inputs and outputs to find the model's sensitivity to risk factors and recognize patterns using advanced analytics.

Apart from smoothing the process of model risk management, the biggest impact digitization will bring (and is already bringing) is a sense of discipline and a degree of uniformity/standardization in the management of financial risk within the industry. This will make it easy for financial institutions to share best practices in risk management. Thanks to standardization it will be possible for banks to share relevant risk experience at “arm's length” with peers. For example:

  1. Experience related to operational risk.
  2. Experience related to Loss Given Default for industrial benchmarking.
  3. Experience related to money laundering so that strong and robust anti-money laundering strategies could be developed.