Risk management systems and front office systems

Validators often have to validate derivative pricing models. Banks generally have two systems: one for risk management and one for the front office. Ideally the two systems should be identical, but in practice this is uncommon. Because front office systems directly drive trading, they tend to be more sophisticated, and therefore more expensive. For that reason banks often use cheaper or home-grown systems for their risk management requirements.

Front office pricing models tend to be more accurate because they directly affect the business's PnL. Pricing models in risk systems are simplified because in risk calculations such as VaR/PFE a derivative has to be priced many times, so there are tight time constraints.

Differences between derivative pricing models in the front office and risk management systems can be observed from two perspectives: input data and model.

Input data perspective:

  1. Front office systems use sophisticated techniques to interpolate the yield curve, whereas risk management systems often get away with simple linear interpolation (a minimal interpolation sketch follows this list).
  2. Option-based products use volatility smiles (simple equity options), volatility surfaces (FX options, caps/floors) or volatility cubes (swaptions).
    1. Front office data generally has more points in its smiles, surfaces or cubes than risk management systems do.
    2. Interpolation follows the same pattern discussed above.
  3. Risk models require historical data to calculate VaR/PFE. For illiquid currencies and exotic derivatives, historical data is often inadequate, so alternative data is used. For example, if the volatility of a particular currency is not available, the volatility of a comparable currency is used as a proxy.
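
A minimal sketch of point 1, comparing simple linear interpolation of a zero curve with a smoother cubic-spline interpolation of the kind a front office library might use. The tenors and rates below are made up for illustration, not taken from any real system.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative zero curve: tenors in years, continuously compounded zero rates
tenors = np.array([0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0])
zeros = np.array([0.010, 0.012, 0.015, 0.018, 0.022, 0.025, 0.028])

def zero_linear(t):
    """Risk-system style: piecewise-linear interpolation of zero rates."""
    return np.interp(t, tenors, zeros)

# Front-office style: a smooth cubic spline through the same nodes
spline = CubicSpline(tenors, zeros)

def discount_factor(t, zero_rate):
    """Continuously compounded discount factor."""
    return np.exp(-zero_rate * t)

# Compare the two interpolations at an off-node maturity, e.g. 7 years
t = 7.0
for label, z in [("linear", zero_linear(t)), ("spline", float(spline(t)))]:
    print(f"{label}: zero = {z:.5f}, discount factor = {discount_factor(t, z):.6f}")
```

The difference per discount factor is small, but it propagates into every forward rate and sensitivity, which is one source of the pricing gap between the two systems.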

The challenge multiplies when the portfolio contains multiple currencies. The data triangulation issue grows because in risk management the same simplifications are often applied to every currency, and they may not be valid for all of them. Risk managers often prioritize their data based on portfolio concentration.

Model perspective:

For exotic derivatives, front office pricing models use sophisticated techniques such as Monte Carlo simulation, whereas risk models use approximate techniques. These are often analytical approximations of pricers for which no exact analytical solution is tractable. For example, to price an American put option the front office may use Monte Carlo simulation, while the risk models generally rely on approximate analytical formulas.
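
To make the trade-off concrete, here is a hedged sketch (not anyone's production method): an American put priced with a Cox-Ross-Rubinstein binomial tree, standing in for the slower, more accurate front-office approach, against the plain European Black-Scholes price as the kind of fast analytical proxy a risk engine might fall back on. All parameters are made up.

```python
import math

def american_put_crr(S, K, r, sigma, T, steps=500):
    """American put via a Cox-Ross-Rubinstein binomial tree (slower, more accurate)."""
    dt = T / steps
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # Terminal payoffs at maturity
    values = [max(K - S * u**j * d**(steps - j), 0.0) for j in range(steps + 1)]
    # Backward induction with an early-exercise check at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            exercise = K - S * u**j * d**(i - j)
            values[j] = max(cont, exercise)
    return values[0]

def european_put_bs(S, K, r, sigma, T):
    """European Black-Scholes put: a fast proxy that ignores early exercise."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return K * math.exp(-r * T) * N(-d2) - S * N(-d1)

S, K, r, sigma, T = 100.0, 100.0, 0.05, 0.25, 1.0
print("tree (accurate):", round(american_put_crr(S, K, r, sigma, T), 4))
print("BS European (fast proxy):", round(european_put_bs(S, K, r, sigma, T), 4))
```

The tree picks up the early-exercise premium that the fast proxy misses; risk systems typically accept such gaps in exchange for speed.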

For these reasons there are often pricing differences between the front office and risk systems. Validators often use one system to benchmark the other and leverage this in the validation exercise. Even if the prices do not match exactly, the sensitivities should match.

Need for FRTB

I attended an FRTB conference last Thursday/Friday.

One speaker there gave a very nice and interesting overview of why FRTB exists in the first place. Listening to him was a treat!

I was able to capture some notes from his presentation.

During the pre-crisis era

  1. The rules defining the trading book/banking book boundary were imprecise, and regulators could not stop the regulatory arbitrage taking place in the banking sector through the transfer of assets between the banking and trading books.
  2. Regulators had no desk-level visibility. They had the right to look into details below the enterprise level, but because there were no set procedures around it, they seldom did.
    1. Because of this, regulators were not able to question the desks carrying out complex trades in seniority tranches.
  3. There was no regulatory tool to halt the use of a model. Regulators could question model usage but had to deal with each model individually, and in the process many risks remained uncovered.
  4. There was no linkage between the standardized approach and the model-based approach: no logical or intuitive relationship between the capital calculated under the two.
  5. The degree of complexity was not penalized in the way regulators deemed fit; rating migration and default risk were not captured.
  6. With the VaR approach, regulators and risk managers were looking at the point where the tail begins rather than at what lies within the tail (see the sketch after this list).
  7. The 2008 crisis proved that the way liquidity was measured was inadequate.
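
On point 6, a minimal sketch of the distinction using simulated P&L (purely illustrative): 99% VaR only marks where the tail begins, while Expected Shortfall averages the losses inside it.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated daily P&L with fat tails (Student-t), purely illustrative
pnl = 1_000_000 * rng.standard_t(df=4, size=250) / 100

alpha = 0.99
losses = -pnl                                   # express losses as positive numbers
var = np.quantile(losses, alpha)                # where the tail begins
es = losses[losses >= var].mean()               # average loss within the tail

print(f"99% VaR: {var:,.0f}")
print(f"99% ES : {es:,.0f}   (ES > VaR because it looks inside the tail)")
```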

After the crisis

Regulators came up with Basel 2.5/3, a quick patch-up exercise to address some of the above challenges:

  1. Rating migration risk was captured by including CVA in capital calculations.
  2. Liquidity was captured by introducing a multiplication factor of 3.
  3. Tail risk was captured by calculating VaR under stressed market conditions.
  4. A complex calculation was introduced to capture the incremental risk charge.
  5. A comprehensive risk measure was applied to correlation trading books.

The FRTB rules were settled over a course of almost five years. They address all the above issues and also include:

  1. The need to address data quality issues by introducing non-modellable risk factors.
  2. Comparability of the Standardized Approach and the Internal Model Approach by making the Standardized Approach risk sensitive.
  3. Mandatory PnL attribution, and hence triangulation between front office data and risk management data.

 

I will update this page as I learn and understand FRTB more.

Model Risk Management under FRTB regime

In previous regulatory guidelines, the standardized approach was not meant to be a model. There was no linkage between the standardized approach and the internal model-based approach.

Regulators could compare capital under the standardized approach across banks because the rules were the same for everyone, but for risk managers there was no logical or intuitive relationship between the capital calculated under the standardized and internal model-based approaches for a given portfolio.

Making the standardized approach risk sensitive is a path-breaking change by the regulators.

Under FRTB, the standardized approach's risk sensitivities come from the derivative pricing models, and these pricing models have uncertain outcomes. Capital has to be calculated at the desk level, so each desk's models have to go through the normal model risk management framework, where they are checked for conceptual soundness, outcomes analysis and ongoing monitoring.

Currently (Basel 3 and before), when capital was allocated for specific risk charges, risk managers generally used lookup tables and applied some formulas. This is no longer valid under FRTB. FRTB has introduced the concept of non-modellable risk factors: any risk factor with fewer than 24 data points will attract a Non-Modellable Risk Charge (NMRC).
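
Below is a minimal sketch of such a modellability check, under the assumption that the test counts real price observations over a rolling one-year window and also limits the gap between consecutive observations; the function, thresholds and dates are simplified illustrations, not the full regulatory test.

```python
from datetime import date, timedelta

def is_modellable(observation_dates, as_of, min_obs=24, max_gap_days=31):
    """Rough FRTB-style check: enough real price observations in the last year
    and no excessively long gap between consecutive observations.
    Simplified illustration only, not the full regulatory test."""
    window_start = as_of - timedelta(days=365)
    obs = sorted(d for d in observation_dates if window_start <= d <= as_of)
    if len(obs) < min_obs:
        return False
    gaps = [(b - a).days for a, b in zip(obs, obs[1:])]
    return all(g <= max_gap_days for g in gaps)

# Hypothetical example: weekly observations pass easily, sparse ones fail
weekly = [date(2016, 1, 4) + timedelta(weeks=i) for i in range(40)]
sparse = [date(2016, 1, 4) + timedelta(weeks=4 * i) for i in range(10)]
print(is_modellable(weekly, date(2016, 12, 30)))   # True
print(is_modellable(sparse, date(2016, 12, 30)))   # False
```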

FRTB also acknowledges that in various situations (especially exotic option pricing) the prescribed delta, gamma and vega approach does not work well. Modelers, and subsequently validators, need to be cognizant of this and ensure that the model development framework addresses it in those situations.

Under FRTB, portfolio-level capital has to be computed under three correlation scenarios, and at the desk level the scenario with the maximum capital value is to be chosen. The calculation approach used to derive capital will come under the purview of model risk.
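
A minimal sketch of that selection step with hypothetical desk-level capital numbers (the aggregation that produces each scenario's capital is omitted):

```python
# Hypothetical desk-level capital under the three prescribed correlation scenarios
capital_by_scenario = {
    "high_correlation":   12_500_000,
    "medium_correlation": 11_800_000,
    "low_correlation":    13_100_000,
}

# FRTB-style rule: take the scenario that produces the largest capital
worst_scenario, desk_capital = max(capital_by_scenario.items(), key=lambda kv: kv[1])
print(worst_scenario, desk_capital)   # low_correlation 13100000
```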

The default risk charge uses various inputs that are not always straightforward or deterministic. The choices made by the first line of defense will require validation by the second line of defense.

In the internal model-based approach, Expected Shortfall is calculated. The model is stress calibrated, that is, calibrated to stressed scenarios. Not all risk factors are used in this calibration: the first line of defense has to decide which factors to include, and justify that choice. The second line of defense will have to review and critique those choices.
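
As a hedged sketch of how the reduced risk factor set can feed the stressed Expected Shortfall, the snippet below follows FRTB-style scaling, ES = ES(reduced set, stressed period) × ES(full set, current period) / ES(reduced set, current period); the P&L vectors are simulated placeholders and the 97.5% level and scaling are my reading of the rule rather than a full implementation.

```python
import numpy as np

def expected_shortfall(pnl, alpha=0.975):
    """ES at the given confidence level from a P&L vector (losses positive)."""
    losses = -np.asarray(pnl)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

rng = np.random.default_rng(1)
# Placeholder P&L vectors: full vs reduced factor set, current vs stressed period
es_full_current     = expected_shortfall(rng.normal(0, 1.0, 250))
es_reduced_current  = expected_shortfall(rng.normal(0, 0.9, 250))
es_reduced_stressed = expected_shortfall(rng.normal(0, 1.8, 250))

# Scale the stressed, reduced-set ES back up to the full factor set
stressed_es = es_reduced_stressed * (es_full_current / es_reduced_current)
print(round(stressed_es, 3))
```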

The same approach has to be used for add-ons due to non-modellable risk factors. The multipliers have to be decided based on the exceptions. Similarly, for PnL attribution the first line needs to decide the approach, and that approach will be reviewed by the second line of defense.

Huge opportunities in Ongoing Monitoring

By insisting on capital allocation at the desk level, regulators want banks to keep detailed tabs on every trading activity. Even if a desk qualifies for the internal model-based approach, it is still mandatory to calculate capital under the standardized approach.

This will generate a massive amount of data that needs to be monitored. Banks will require sophisticated model monitoring frameworks that keep an eye on outliers and perform pattern analysis on each data sequence generated, so that timely warning signals are raised.

To handle this amount of data, banks will require a sophisticated technological framework that uses advanced statistical tools to perform pattern analysis of the risk data generated.

The triggers would depend not only on patterns but also on combinations of risk measures. Let's understand this with an example. Suppose the time series of gamma OR vega shows some outliers: the monitoring system should raise a trigger, but it may not be critical. If, however, outliers are observed in the gamma AND vega series, it may genuinely be a case where risk managers should be cautioned. This will give rise to research in a new field of financial risk management.
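
A minimal sketch of that trigger logic, using simple z-score outlier flags on simulated gamma and vega series; the thresholds and data are illustrative only.

```python
import numpy as np

def outlier_flags(series, z_threshold=3.0):
    """Flag points more than z_threshold standard deviations from the mean."""
    s = np.asarray(series, dtype=float)
    z = (s - s.mean()) / s.std()
    return np.abs(z) > z_threshold

rng = np.random.default_rng(2)
gamma = rng.normal(0, 1, 250)
vega = rng.normal(0, 1, 250)
gamma[100] = 8.0          # inject a joint outlier for illustration
vega[100] = 7.5

g_flags, v_flags = outlier_flags(gamma), outlier_flags(vega)
warning = g_flags | v_flags            # gamma OR vega: low-severity trigger
critical = g_flags & v_flags           # gamma AND vega: escalate to risk managers
print("warning days:", np.where(warning)[0])
print("critical days:", np.where(critical)[0])
```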

Banks may choose to develop such frameworks in-house or may choose to employ vendor models.

From the vendor perspective, there will be huge competition, with each vendor claiming that its model monitoring framework is better because of the more advanced techniques it uses. This raises further issues around disclosure of methodologies and techniques: no vendor will want to disclose the methodology behind its monitoring framework for fear of being copied by competitors.

Concluding thoughts

Looking at the market risk management framework, it appears regulators have made a giant leap in understanding and managing risk, but because of the data intensiveness of the suggested approach, a new race in ongoing monitoring will begin.

 

There is also a publication: Model Risk Management under the FRTB Regime

Link: http://www.garp.org/#!/risk-intelligence/detail/a1Z40000003LViKEAW/model-risk-management-under-frtb-regime

 

I will update this page as I learn and understand FRTB more.

What concerns banks most on FRTB implementation

I have been looking into FRTB for the last couple of months, and after reading various points of view, talking to experts and listening to their views, I feel the key concerns banks have about FRTB implementation are:

  1. The most important concern: data and its alignment.
  2. The need for national regulators' clarity on local-level implementation.
  3. Overall cost of implementation (this requires understanding the bank's internal systems).
  4. Analytics of FRTB.
  5. Viability of the business.
  6. Estimates of ongoing business-as-usual costs.

Reasonableness of these concerns:

Data and its alignment

Due to non-alignment of front office and risk management data, PnL attribution will be a challenge, especially when front office PnL includes intraday trades while risk PnL is composed of portfolio snapshots. The pricing methodologies and the choice of data for marking derivatives to market also need to be aligned.

For many banks FRTB implementation will require significant changes to the current market and risk infrastructure. This may include:

  1. Comparing risk factors captured in risk management tools and pricing models.
  2. Comparing the bank's current methodologies for measuring risk factor sensitivities against the methodologies prescribed under the sensitivities-based approach, and then undertaking the remediation efforts needed to support the standardized calculations.
  3. Identifying sources of gaps in transaction data that may impact the real price criteria, and determining the infrastructure and processes required to remediate such gaps.
  4. Analyzing the sources of reference data needed for the standardized approach.
  5. Assessing the hardware and calculation efficiency needed to meet the increased computation and data storage demands.
  6. Coordinating FRTB programs with other programs such as BCBS 239.

Another big challenge is that whenever new risk systems are implemented, or sometimes even upgraded, departments cannot let go of the existing systems, because legacy dependencies of people and processes would be disrupted.

So the biggest challenge executives have is fragmented systems. This problem spills into data, models, their monitoring, and practically every process related to risk management.

This problem of fragmented data, systems and processes is multiplied because, in addition to organic growth, banks grow inorganically through continuous acquisitions and mergers. Acquired businesses may come from domains different from those the bank originally had. If the acquisition is in a different geographic location, the systems may face not only the challenge of a different regulatory territory but also challenges as basic as a difference in language.

Regular churning of the portfolio based on “at the moment” business and economic scenarios also compounds this challenge.

There is an article I would like the reader to check: FRTB Compliance – Implementation Challenges: http://www.garp.org/#!/risk-intelligence/culture-governance/compliance/a1Z40000003PBRnEAO/frtb-compliance-implementation-challenges

Non-modellable risk factors also pose serious challenges for FRTB compliance. For pricing a security, the market data currently available at that point may be sufficient, but FRTB compliance demands that risk systems use the same risk factors for pricing as the front office, and for risk calculation purposes the history of that data may not be available.

National regulator’s clarity on the local level implementation

This is of course a serious concern because it is needed to start planning the implementation of the framework. But historically, national regulators seldom deviate materially from the basic idea, so this concern, though serious, is not critical.

Though not critical, there can be a lot of confusion for banks with a global presence that operate in multiple jurisdictions, because every jurisdiction will have its own requirements for FRTB implementation covering the bank's local and international assets. Some of these can be:

  1. Definition of exotics and complex instruments for residual add-ons by various jurisdictions.
  2. FRTB has enhanced supervisory powers over the approval of transferring instruments between the banking and trading books. For banks working across multiple jurisdictions, variation in supervisors' opinions will pose implementation challenges.

Overall cost of implementation

FRTB is about making risk systems transparent and comparable. Banks that have followed a disciplined approach to ensuring this as part of their culture will not incur huge costs.

The new trading and banking book boundary will lead to increased operational costs with multi-faceted approaches and technology infrastructure changes.

Analytics of the FRTB

This is not the most challenging issue of FRTB. Banks have already developed analytical frameworks for backtesting their VaR models, and meeting the FRTB guidelines should not pose any significant analytics challenges.

Through PnL attribution you are supposed to capture the unexplained PnL between the risk system's PnL and the front office PnL.

As noted above, non-alignment of front office and risk management data will make PnL attribution a challenge, especially when front office PnL includes intraday trades while risk PnL is built from portfolio snapshots; the pricing methodologies and the data used for marking derivatives to market also need to be aligned.
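
As a hedged sketch of the attribution metrics, the snippet below computes unexplained-PnL statistics in the spirit of the early BCBS consultation (mean of unexplained PnL relative to the standard deviation of hypothetical PnL, and the ratio of their variances); the PnL series are simulated placeholders, and the regulatory thresholds against which these ratios are judged are not shown.

```python
import numpy as np

def pla_metrics(risk_theoretical_pnl, hypothetical_pnl):
    """Unexplained-PnL metrics in the spirit of the early FRTB PLA test:
    mean and variance of (hypothetical - risk-theoretical) PnL,
    normalized by the hypothetical PnL's standard deviation / variance."""
    rtpl = np.asarray(risk_theoretical_pnl)
    hpl = np.asarray(hypothetical_pnl)
    unexplained = hpl - rtpl
    mean_ratio = unexplained.mean() / hpl.std()
    var_ratio = unexplained.var() / hpl.var()
    return mean_ratio, var_ratio

rng = np.random.default_rng(3)
hpl = rng.normal(0, 1_000_000, 250)                # front office (hypothetical) PnL
rtpl = hpl + rng.normal(0, 200_000, 250)           # risk-system PnL with alignment noise
m, v = pla_metrics(rtpl, hpl)
print(f"mean ratio: {m:+.3f}  variance ratio: {v:.3f}")
```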

Viability of business

Failure of internal models in PnL attribution will force banks to fall back on the standardized approach, which will raise capital requirements. This may make trading desks/businesses less attractive in terms of profit, and it is possible that some desks/businesses might be shut down.

Estimates on the ongoing business as usual costs

The word “report” is mentioned 69 times in the final document released on Jan 16th. Clearly FRTB has made all the risk processes more sensitivity-driven, and those sensitivities will need to be recorded, monitored and periodically analyzed. There will be significant costs in establishing model/process monitoring frameworks. This culture will bring discipline to the risk processes, so the returns will be seen in the long term.

 

I will update this page as I learn and understand FRTB more.

Digitization of Financial Risk Model Life Cycle

The future of robust financial risk management lies in an integrated risk management system where the various risk-related processes are digitized so that there are minimal manual touch points and the processes are formalized and uniform across the financial institution.

Various technology vendors have upped the ante to provide technological platforms for risk management activities to their financial clients. Often a disconnect exists between the expectations of risk managers and the services the technology vendors claim to provide. In this article we try to fill that gap by stating the expectations from a financial risk manager's perspective.

The first step in any risk management process is business understanding. Risk managers should have a clear understanding of the portfolio whose risk they plan to quantify, and of how it ties into the company's business objectives.

It will be important for any digitized framework to help risk managers record their understanding of the business easily. The framework should enable risk managers to document discussions with business managers, and help them map the business processes feeding the model they are developing as well as the processes affected by the model's output.

The second step is understanding the data associated with the business process and then cleansing it. The digitization framework should enable the risk manager to understand:

  1. Properties of the data, such as: a probability of default cannot exceed 1, and stock prices must be greater than zero (a minimal validation sketch follows this list).
  2. It should help the risk manager triangulate the data back to its origin. For example, a given loan portfolio may have three different models predicting Probability of Default (PD), Loss Given Default (LGD) and Exposure at Default (EAD), developed by separate teams. These teams should be able to ensure that the data they are using is consistent. Interestingly, it often happens that a 100B portfolio has 6 different PD models (say covering 10B, 20B, 30B, 10B, 20B and 20B), 4 LGD models (25B each), and an EAD model catering to a 250B portfolio of which this 100B is a part.
  3. The framework should also ensure that the data cleansing experience is recorded, so that it can potentially be automated as the process becomes standardized.
  4. Where data is limited, the risk system should be able to point the risk manager to similar portfolios within the organization whose data may be used to gain a deeper understanding of the business risk problems.
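
For point 1, a minimal sketch of the kind of automated property checks a digitized framework could run on a loan data set; the column names and rules are hypothetical.

```python
import pandas as pd

# Hypothetical loan-level data set
loans = pd.DataFrame({
    "loan_id": [1, 2, 3, 4],
    "pd":      [0.02, 0.15, 1.30, 0.05],    # probability of default
    "lgd":     [0.40, 0.55, 0.45, -0.10],   # loss given default
    "ead":     [1_000_000, 250_000, 500_000, 750_000],
})

# Property rules: each returns a boolean mask of violating rows
rules = {
    "pd must lie in [0, 1]":  ~loans["pd"].between(0, 1),
    "lgd must lie in [0, 1]": ~loans["lgd"].between(0, 1),
    "ead must be positive":   ~(loans["ead"] > 0),
}

for rule, violations in rules.items():
    if violations.any():
        print(rule, "violated by loan_id:", loans.loc[violations, "loan_id"].tolist())
```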

A disconnect often exists among the risk modeling teams: a team developing a PD model for a portfolio is not aware of the LGD model developed for the same portfolio. As discussed in point 2, there may not be a dedicated PD, LGD or EAD model for a given portfolio, which makes the problem even more complex. The risk system should be able to tell, at the data set level, where and how any data is being used.

The third step of MRM is development, testing and implementation of the model. The choice of methodology is unique to the underlying data and its properties. For simple regression-based models, the digital framework should provide all the statistical measures relevant to ensuring that the regression is sound, and it should be able to warn the risk managers if there is a possibility of spurious regression. Two practical examples we would like to share:

  1. When data is limited, regression relationships are held hostage by a few outlier observations; their inclusion or exclusion makes or breaks the relationship.
  2. In the absence of stationarity, statistical measures may suggest a false validity of the regression relationship (see the sketch after this list).
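
A minimal sketch of the second point: two independent random walks can produce an apparently "significant" regression, which an ADF stationarity test on the residuals (here via statsmodels, assuming it is available) helps expose.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
n = 500
x = np.cumsum(rng.normal(size=n))   # independent random walk
y = np.cumsum(rng.normal(size=n))   # another independent random walk

# Naive regression of y on x can look "significant" despite no real relationship
model = sm.OLS(y, sm.add_constant(x)).fit()
print("R-squared:", round(model.rsquared, 3), " slope p-value:", model.pvalues[1])

# ADF test on the residuals: a high p-value means non-stationary residuals,
# i.e. the apparent relationship is likely spurious
adf_stat, adf_pvalue, *_ = adfuller(model.resid)
print("ADF p-value on residuals:", round(adf_pvalue, 3))
```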

There should be a provision to record such experience studies so that future model developers and validators can use them to ensure robustness in the risk models.

When models are just implementations of standard methodologies, they are easily built into risk systems. But it is also common for a model methodology to represent a business process, and it is a challenge to make such models completely system-based.

They require advanced tools like R, Matlab, SAS or C++. The digitized framework should be robust enough to ensure that:

  1. Data inputs/outputs are easily imported/exported in/out of these programming languages.
  2. It supports automated running of code written in various programming languages.

It is also the developers' responsibility, when the model is coded externally by them, to ensure it is robust enough to run in an automated fashion with minimal human intervention. They should follow coding best practices such as no hard-coding of numbers and coding the model in one consistent language of their choice.

The fourth step is model documentation. This is the most important aspect of risk management and probably the biggest value-add any digitized framework can provide. The digitized framework should provide:

  1. The ability to develop the documentation continuously, starting from business understanding through to the final step of model monitoring.
  2. It should thus serve as a representation of the status of the risk management exercise for any senior manager or leader.

The next step in MRM is model validation. There are two schools of thought in the financial risk management industry around model validation:

  1. Model validation exercise starts after completion of model development.
    • Benefits: Ensures that model validation is independent of model development.
    • Limitations: It is a sequential process and hence time consuming, even more so when the model is failed by the validators or later by the auditors.
  2. Model validation exercise runs parallel to model development.
    • Benefits: Model validators are part of the discussions from the very beginning and give independent opinions on the modeling process on an agile basis, so the whole risk measurement exercise is completed in a shorter duration.
    • Limitations: Higher possibility that independence of model validation and development is compromised.

The digitization engine should ensure that in both scenarios validators are able to independently critique the business process, test the model for conceptual validity, and perform their own backtesting and sensitivity analysis.

Once the model is validated and implemented, for ongoing monitoring the digitized framework should be able to:

  • Keep track of the model outputs, backtest and sensitivity results.
  • Let risk managers compare the results with similar models in the company and hence draw inferences about the business and any model risk.
  • Let risk managers use historical inputs and outputs to find the model's sensitivity to the risk factors and recognize patterns using advanced analytics.

Apart from smoothing the model risk management process, the biggest impact digitization is bringing, and will bring, is a sense of discipline and a degree of uniformity/standardization in the management of financial risk within the industry. This will make it easy for financial institutions to share best practices in risk management. Thanks to standardization, it will be possible for banks to share relevant risk experience at “arm's length” with peers. For example:

  1. Experience related to operational risk.
  2. Experience related to Loss Given Default for industrial benchmarking.
  3. Experience related to money laundering so that strong and robust anti-money laundering strategies could be developed.

 

The Cato Summit on Financial Regulation – June 2 2015

On June 2, 2015 I attended a summit in NYC: Capital Unbound, the Cato Summit on Financial Regulation.

I would like to present a brief summary of this summit.

The summit was about the industry's response to the current challenges regulatory policies pose to financial institutions. It was a philosophical critique of the SEC's actions and their counterproductive impacts.

The speakers gave a rhetorical view that because of regulatory policies the current growth has been late to arrive, and that there is no guarantee this growth will be sustainable.

Key points:

  • The earlier recession arrived because of irresponsible underwriting of housing mortgages to subprime customers. This underwriting policy was started by Fannie Mae and Freddie Mac and then followed by private players, so the fall in house prices created a systemic challenge. Because both organizations were government sponsored, it was the government that started this trend.
  • Speakers claimed that the government (Fed, SEC etc.) has not addressed this key issue but instead has rigorously imposed risk limitations on other commercial and consumer lending. They also claimed that its interference in banks' trading books has severely limited business viability.
  • They claimed that government policies are creating opportunities for even more SIFIs while then limiting their activities, and that smaller banks are becoming more and more vulnerable because of regulatory compliance. They claimed this severely limits banks' ability to act as small business lenders and thus hampers the country's long-term growth.
  • The speakers claimed it would have been a good idea to let the banks fail; that would have resulted in a stronger financial system.

Key Speakers were:

  • Commissioner J. Christopher Giancarlo, U.S. Commodity Futures Trading Commission
  • Kevin Dowd, Professor of Finance & Economics, Durham University and Adjunct Scholar, Cato Institute
  • Commissioner Michael Piwowar, U.S. Securities and Exchange Commission

Further details can be found at: http://www.cato.org/events/capital-unbound-cato-summit-financial-regulation

Books provided at the summit:

As I said earlier, these comments were more rhetorical views given the limited time span of the summit, so the organizers provided several complimentary books to explain their convictions. The books I was able to collect are the following:

  • The Leadership Crisis and the Free Market Cure: Why the Future of Business Depends on the Return to Life, Liberty, and the Pursuit of Happiness

http://www.amazon.com/Leadership-Crisis-Free-Market-Cure/dp/0071831118/ref=sr_1_1?ie=UTF8&qid=1433345037&sr=8-1&keywords=1%29%09The+Leadership+crisis+and+the+free+market+cure

  • The Financial Crisis and the Free Market Cure: Why Pure Capitalism is the World Economy’s Only Hope

http://www.amazon.com/Financial-Crisis-Free-Market-Cure/dp/0071806776/ref=sr_1_2?ie=UTF8&qid=1433345037&sr=8-2&keywords=1%29%09The+Leadership+crisis+and+the+free+market+cure

  • The Libertarian Mind: A Manifesto for Freedom

http://www.amazon.com/Libertarian-Mind-Manifesto-Freedom/dp/1476752842/ref=sr_1_1?s=books&ie=UTF8&qid=1433345183&sr=1-1&keywords=the+libertarian+mind

  • Reckless Endangerment: How Outsized Ambition, Greed, and Corruption Created the Worst Financial Crisis of Our Time

http://www.amazon.com/Reckless-Endangerment-Outsized-Corruption-Financial/dp/1250008794/ref=sr_1_1?s=books&ie=UTF8&qid=1433345228&sr=1-1&keywords=reckless+endangerment

I have checked the reviews of these books on Amazon; all are rated 4.5/5.

PRMIA Model Risk Management conference NYC, Oct 15th 2015

I attended a model risk management conference organized by PRMIA:

http://www.prmia.org/civicrm/event/info?id=6716&reset=1

It was a fairly general conference on model risk management, not focused on any specific agenda. I would like to list the key points that were discussed:

  1. Model documentation is a key challenge for model risk. There are many reasons for the lack of proper model documentation, namely:
    1. Model developers are quants and are not interested in documenting their models because they do not find it interesting.
    2. Business owners do not want to document the business activity that translates into the model because they do not want to disclose their business strategies; once documented, those strategies would effectively be in the public domain.
  2. There should be a specific team whose responsibility is documentation.
  3. The panel stressed the importance of translators: people who understand the business, quant and IT aspects. They coordinate with the respective teams and hence limit model risk.
    1. This will give rise to the role of consultants.
  4. Model risk arises from a lack of understanding of the ecosystem. When two banks merge, in addition to merging balance sheets they should also merge their models. There was an incident where two banks merged and two of their divisions were selling the same derivative at two different prices, creating an arbitrage opportunity within the bank.
  5. It is observed that when banks fail a regulatory test (e.g. CCAR) they make huge investments in hiring quants for model development and model validation. Ideally they should also hire more business professionals who understand the business and can streamline its processes. This streamlining will help in robust model development and hence limit model risk.
  6. Model risk also arises because risk managers do not ask enough questions when they receive risk numbers from the business. A cultural shift is needed in which risk managers insist on explanations from the business for the numbers they provide.
  7. Ongoing monitoring is very important for keeping model risk in check. Best practices need to be developed to handle these processes. Banks can:
    1. Automate the whole process; this itself will give rise to new model risk.
    2. Transfer the process to countries where costs are low.
    3. Prepare dashboards that alert stakeholders when the model is not performing along expected lines.
  8. The irony is that banks invest in risk management when they fail a regulatory test and not when they pass, which itself gives rise to model risk.

My opinion

I have an opinion on one point: business owners being afraid of disclosing their strategies.

Being in business because you have a secret sauce is a thing of the past now. Winners will be those who understand their customers well.

 

Date: Oct 16 2016

Interesting article by the speaker:

https://www.linkedin.com/pulse/all-model-risk-comes-from-models-martin-goldberg-ph-d-?trk=hp-feed-article-title-like