**Anecdote 1**

There was a stress-testing model in which the developers calculated losses from the historical loss series of an asset portfolio, in this case a private equity portfolio.

The series contained loss numbers for the last ~20 quarters. The developers found a regression relationship between the losses and a macroeconomic (ME) variable, stressed the ME variable, and read off the resulting loss, which they claimed was their stressed loss.

Conceptually this argument is incorrect. It does not account for the change in the portfolio's valuation under the stressed scenario: in a stress, not only do losses go up, returns go down as well. Ignoring the valuation effect understates the risk in the stressed scenario.

They should have used a fair-market-valuation (mark-to-market) approach instead.
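A minimal numpy sketch of the gap between the two approaches; the loss series, coefficients, stressed ME value, and the revaluation figure are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical history: ~20 quarters of portfolio losses and a macro (ME) variable
me = rng.normal(2.0, 1.0, 20)                        # e.g. change in unemployment, %
losses = 0.5 + 1.2 * me + rng.normal(0.0, 0.3, 20)   # quarterly loss, % of portfolio

# The developers' approach: fit losses = a + b*me, then plug in the stressed ME value
b, a = np.polyfit(me, losses, 1)
stressed_me = 6.0
stressed_loss = a + b * stressed_me      # what the developers reported

# The missing piece: under stress the fair value of the portfolio also falls,
# so a revaluation (mark-to-market) loss must be added on top
revaluation_loss = 4.0                   # placeholder: would come from a fair-value model
total_stressed_loss = stressed_loss + revaluation_loss
```

The regression captures only the historical loss channel; the revaluation term, however it is estimated, is what the developers left out.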

**Anecdote 2**

Once there was a regression-based model. On the surface, all the statistical measures that establish the soundness of a regression relationship looked good. Two points attracted attention:

1) The data series was only 29 points long.

2) There was one outlier point y for which the regression prediction y’ was spot on; for all other points, y’ was inaccurate.

Re-running the regression after omitting that outlier resulted in a failed F-test. This was no surprise: the outlier was driving the whole relationship.

The developers argued that they could not find a better relationship and that this was the most conservative one available. We checked the data and agreed with them. Had the data series been longer, the problem would not have been as severe.

We recommended two things to the developers:

1) Document the issue: the model development document should describe the challenges faced and explain why this is the best option available.

2) We understand and agree with the statement “*All models are wrong*“, so it would be great if the developers used a bootstrapping approach, so that a range of coefficients, and hence a range for the variable of interest, could be examined.
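A pairs-bootstrap sketch of that recommendation, on synthetic data of the same length (n = 29; the data-generating process here is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the 29-point series (true slope 2.0 by construction)
n = 29
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)

# Pairs bootstrap: resample (x, y) pairs with replacement and refit each time
slopes = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    b, a = np.polyfit(x[idx], y[idx], 1)
    slopes.append(b)

lo, hi = np.percentile(slopes, [2.5, 97.5])
# [lo, hi] is a 95% interval for the coefficient; pushing both endpoints
# through the model gives the recommended range for the variable of interest.
```

On a series this short the interval is wide, which is exactly the point: it makes the coefficient's instability, and its effect on the output, visible.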

**Anecdote 3**

There was an LGD model based on decision trees. The tree was developed using a mixture of business judgement and entropy minimization.

In the decision tree, three levels down, a node was split into two leaves: one leaf had an LGD of 7% and the other an LGD of 70%. There were many such instances of large differences between sibling leaves in the tree.

The model developers could not provide a business reason why those obligors belonged together for three levels, yet after one final split one leaf ended up with an LGD of 7% and the other with an LGD of 70%.
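For reference, a minimal sketch of the entropy-minimization part of such a split; the binary high/low-loss labels and the toy node are invented (real LGD trees work on richer data), but the mechanics are the same:

```python
import numpy as np

def binary_entropy(y):
    """Entropy (bits) of a 0/1 label array."""
    if len(y) == 0:
        return 0.0
    p = float(np.mean(y))
    if p in (0.0, 1.0):
        return 0.0
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

def split_entropy(y, mask):
    """Weighted average entropy of the two children induced by a boolean split."""
    n = len(y)
    left, right = y[mask], y[~mask]
    return len(left) / n * binary_entropy(left) + len(right) / n * binary_entropy(right)

# Toy node: six obligors, three high-loss (1) and three low-loss (0)
y = np.array([0, 0, 0, 1, 1, 1])
clean = np.array([True, True, True, False, False, False])   # separates perfectly
mixed = np.array([True, False, True, False, True, False])   # separates nothing
```

Entropy minimization will happily produce splits like the 7%/70% pair in the anecdote; the validation point is that a statistically optimal split still needs a business rationale.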

**Anecdote 4**

There was an ECap (economic capital) model. The portfolio revenue followed mean reversion, so the portfolio was valued by discounting the forecasted future cashflows; the fixed maintenance cost was deducted from the discounted revenue, and the result was the value of the portfolio.

For the ECap calculation, a Monte Carlo simulation was performed: the mean-reversion model was implemented, multiple paths were generated (for the state one year ahead), and for each path the discounted present value was calculated, with the maintenance cost deducted from each discounted revenue value.

For ECap, the worst value that satisfied the required percentile confidence level was chosen.

Conceptually, ECap = Expected Valuation − Unexpected Valuation (the valuation at the tail percentile), not the tail valuation taken on its own.

So the model was failed.
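A sketch of the corrected calculation, using a discretized mean-reverting (Ornstein-Uhlenbeck-style) revenue process; every parameter here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean-reverting quarterly revenue process
n_paths, n_quarters = 10_000, 4          # one-year horizon
kappa, mu, sigma = 0.5, 100.0, 10.0      # reversion speed, long-run mean, volatility
r, maintenance = 0.02, 50.0              # quarterly discount rate, fixed cost

rev = np.full(n_paths, 100.0)
value = np.zeros(n_paths)
for t in range(1, n_quarters + 1):
    rev = rev + kappa * (mu - rev) + sigma * rng.normal(size=n_paths)
    value += rev / (1.0 + r) ** t        # discounted revenue per path
value -= maintenance                     # portfolio value per path

expected_value = value.mean()
tail_value = np.percentile(value, 0.1)   # worst value at 99.9% confidence

# ECap is the unexpected part: the distance from the expected valuation to the
# tail valuation, not the tail valuation itself (the mistake in the anecdote)
ecap = expected_value - tail_value
```

Reporting `tail_value` directly, as the model did, confuses a valuation level with a capital amount.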

**Anecdote 5**

There was a regression-based valuation model. The model looked something like this:

Valuation(T) = f(change in Macro Economic variables, Valuation(0))

The model was failed because:

1) Valuation(T) should depend on Valuation(T-1), not on Valuation(0).

2) Because the model depends on Valuation(0) rather than Valuation(T-1), the same change in macroeconomic variables produces the same valuation in a recession as in a boom.
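A toy illustration of point 2, with an invented sensitivity and ME path:

```python
# Hypothetical sensitivity and a recession-shaped path of ME changes
beta = 0.8
shocks = [-0.10, -0.10, 0.05]   # two down quarters, then a partial recovery
v0 = 100.0

# Flawed form: every period is keyed off Valuation(0), so the model
# forgets the drawdown already suffered
flawed = [v0 * (1 + beta * d) for d in shocks]

# Autoregressive form: each period compounds on the previous valuation
ar, v = [], v0
for d in shocks:
    v = v * (1 + beta * d)
    ar.append(v)
```

After the recovery quarter the flawed model is back above par (104.0) while the compounding model still carries the recession's damage (about 88.0): the flawed specification cannot tell a boom from a recession.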

**Anecdote 6**

There was a stress test model that was a combination of several sub-models; I will refer to them as A, B, and C.

Models A and B were the feeder models; their outputs were inputs to model C.

Model C’s input were:

- Outputs of A and B.
- Some of its own inputs.

In stressed scenarios, the outputs of A and B were stressed, and the direct inputs of C were stressed as well.

That stress combination did not make business sense, because the dependencies among these inputs themselves change under a stressed scenario; stressing each input in isolation produced an incoherent joint scenario.

Hence the model was failed.

**Anecdote 7**

The model contained an auto-regression of a price series on its own lag 1. The R-squared was 98%.

A clear example of spurious regression: the price level is (near) non-stationary, so regressing it on its own lag yields a high R-squared that reflects the trend in the level, not any genuine forecasting power. The model should have been built on returns (differences).
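A quick numpy demonstration on a simulated random walk (seed and length are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)

# A pure random walk: no structure beyond non-stationarity
price = 100.0 + np.cumsum(rng.normal(0.0, 1.0, 1000))

# Regress the level on its own lag 1
y, x = price[1:], price[:-1]
b, a = np.polyfit(x, y, 1)
r2 = 1.0 - np.var(y - (a + b * x)) / np.var(y)
# r2 comes out close to 1 even though the next step is pure noise;
# differencing (working with returns) removes the illusion
```

A 98% R-squared on price levels says almost nothing; the same regression run on the differenced series would show an R-squared near zero, which is the honest answer.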

**Anecdote 8**

There was a regression-based valuation model. The model looked something like this:

Asset Price(T) = f(Asset Price(T-1)) + alpha × GDP + randomness

The flaw: the impact of GDP is the same absolute amount whether the asset price is 100 or 1000.
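A short illustration of the flaw and of one standard fix, a log (multiplicative) specification; alpha, beta, and the shock size are made up:

```python
import numpy as np

alpha, beta, gdp_shock = 2.0, 0.03, 1.5

# Additive specification: the GDP term moves the price by the same
# absolute amount regardless of the price level
add_impact_100 = alpha * gdp_shock
add_impact_1000 = alpha * gdp_shock      # identical in both cases

# Log specification, log P(T) = f(log P(T-1)) + beta*GDP + noise:
# the same shock now scales with the level
log_impact_100 = 100.0 * (np.exp(beta * gdp_shock) - 1.0)
log_impact_1000 = 1000.0 * (np.exp(beta * gdp_shock) - 1.0)
```

Under the log specification the impact at a price of 1000 is exactly ten times the impact at 100, which is usually the economically sensible behaviour for asset prices.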

**I will keep on updating this page.**