
## Are VaRs normal?

An article in the FT’s recent special report on Finance and Insurance started from the premise that VaR models were a significant factor in landing banks with huge losses in the wake of the collapse of the US housing market, and went on to discuss how new models are being developed to overcome some of their limitations. Part of the point is valid — there were many models that didn’t predict the collapse. But the article is positively misleading in places. For instance, it implies that VaR models are based on the normal distribution:

> VaR models forecast profit and loss, at a certain confidence level, based on a bell-shaped, or “normal”, distribution of probabilities.

In fact Value at Risk, or VaR, is a statistical estimate that can be based on any distribution. And it’s pretty obvious that for many financial applications a normal distribution would be inappropriate. The people who develop these risk models are pretty bright, and that won’t have escaped them. The real problem is that it’s difficult to work out what would be a good distribution to use — or, more accurately, it’s difficult to parameterise the distribution. To get an accurate figure for a VaR you need to know the shape of the distribution out in the tails. And for that, you need data. But by definition, the situations out in the tails aren’t encountered very often, so there’s not much data. And that applies whatever the distribution you’re using. So simply moving away from the normal distribution to something a bit more sexy isn’t necessarily going to make a huge difference to the accuracy of the models.
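To see how much the tail shape can matter, here’s a minimal sketch (using SciPy, with invented figures for a hypothetical daily P&L) comparing a 99.5% VaR under a normal distribution with one under a fatter-tailed Student’s t scaled to the same variance:

```python
# Sketch: how the assumed distribution changes a 99.5% VaR estimate.
# All figures are hypothetical; the point is the tail, not the numbers.
from scipy.stats import norm, t

mean, sd = 0.0, 0.02  # daily P&L: mean zero, standard deviation 2%
p = 0.995             # 99.5% confidence level

# VaR as the loss at the (1 - p) quantile of the P&L distribution
var_normal = -norm.ppf(1 - p, loc=mean, scale=sd)

# Student's t with 4 degrees of freedom, rescaled to the same variance
nu = 4
scale_t = sd * ((nu - 2) / nu) ** 0.5  # variance of t is nu/(nu-2) * scale^2
var_t = -t.ppf(1 - p, df=nu, loc=mean, scale=scale_t)

print(f"normal VaR: {var_normal:.4f}")
print(f"t(4) VaR:   {var_t:.4f}")  # noticeably larger, same mean and variance
```

The fat-tailed distribution gives a distinctly larger VaR from exactly the same mean and variance — which is why getting the tail shape right, and finding enough data to parameterise it, matters so much.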

The article goes on to discuss the use of Monte Carlo models to calculate VaR. Monte Carlo models are useful if the mathematics of the distribution you are using don’t lend themselves to simple analytic solutions. But they don’t stop you having to know the shape of the distribution out in the tails. So they do help extend the range of distributions that can usefully be used, but it’s still a VaR model.
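As a sketch of the idea (with entirely made-up distributions and parameters), a Monte Carlo VaR calculation is just simulation plus a quantile:

```python
# Monte Carlo VaR: simulate the P&L distribution, then read off a quantile.
# The portfolio and its two risk factors are invented for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 100_000

equity = rng.normal(0.0, 80.0, n)     # £m: roughly symmetric market moves
credit = -rng.lognormal(2.0, 1.0, n)  # £m: always a loss, with a fat tail
pnl = equity + credit                 # no neat closed form for this sum

# 99.5% VaR: the loss exceeded with only 0.5% probability
var_995 = -np.quantile(pnl, 0.005)
print(f"99.5% VaR: £{var_995:.0f}m")
```

The simulation copes happily with a distribution that has no analytic quantile — but the tails of `equity` and `credit` are still assumptions, which is the original problem all over again.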

And that’s another problem entirely. VaR, like any other statistical estimate (such as the mean, median or variance), is just a single number that summarises one aspect of a complex situation. Given a probability and a time period, it gives you a threshold value for your loss (or profit — but in risk management applications, it’s usually the loss that’s of interest). So you can say, for instance, that there’s a 0.5% chance that, over a period of one year, your loss will be £100m or more. But it doesn’t tell you how much more than the threshold value your loss could be — £200m? £2bn?
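One common supplement is the expected shortfall — the average loss given that the VaR threshold is breached — which at least says something about the far tail. A minimal sketch, on made-up loss figures:

```python
# VaR is just a quantile; expected shortfall looks past it.
# The loss distribution here is invented purely for illustration.
import numpy as np

rng = np.random.default_rng(seed=0)
losses = rng.lognormal(mean=2.0, sigma=1.0, size=100_000)  # £m, hypothetical

p = 0.995
var = np.quantile(losses, p)      # loss exceeded with 0.5% probability
es = losses[losses > var].mean()  # average loss *given* the VaR is breached

print(f"99.5% VaR:          £{var:.1f}m")
print(f"Expected shortfall: £{es:.1f}m")
```

The shortfall is always at least as big as the VaR; how much bigger depends, once again, on the shape of the tail.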

And it’s a statistical estimate, too. 0.5% may seem very unlikely, but it can happen.

I wouldn’t disagree that a reliance on VaR models contributed to banks’ losses, but I would express it more as an over-reliance on models, full stop. It’s really difficult to model things out in the tails, whatever type of model you are using.


## SunTrust earnings restatement

In late 2004 one of the big US banks, SunTrust Banks, announced a restatement of earnings for the first two quarters. This followed problems they had found with their loan loss allowances, or more specifically, with the model they were using for their loan loss allowances. Part of their press release says: “There were numerous errors in the loan loss allowance calculations for the first and second quarters, including data, model and formulaic errors.” In other words, they are saying that the data that went into the model was wrong, the model itself was not a good fit to reality, and on top of that they hadn’t even implemented this faulty model properly. That covers pretty much everything that can go wrong with a model, if you count data as including parameters (see my article how to believe your models).

The fallout from the modelling problem was definitely non-trivial. Q1 earnings were restated by 1%, Q2 earnings by 6%. In Q2, the loan loss allowances changed by 90%. They had been overestimated, so this means that in the second quarter they were out by an order of magnitude. Three people lost their jobs as a result of the problem, including the Chief Credit Officer. The Financial Controller was reassigned to a position “with responsibilities that involve areas other than accounting or financial reporting” – ie, neither finance nor control.

Moreover, SunTrust’s directors were unable to sign off under section 404 of Sarbanes-Oxley at the next year end. They said that they would likely “not be able to conclude that the Company’s internal control over financial reporting was effective at such date.”

So why did all this happen? Well, apparently they were bringing in new processes and a new model in order to comply with Sarbanes-Oxley. This evidently proved more difficult than they anticipated. They say “The Company’s implementation of a new allowance framework in the first quarter was deficient. The deficiencies included inadequate internal control procedures, insufficient validation and testing of the new framework, inadequate documentation and a failure to detect errors in the allowance calculation.” They also point to deficiencies in spotting the problem, and then in doing something about it. In particular, “certain members of the Company’s management did not treat certain matters raised by the Company’s independent auditor with an appropriate level of seriousness.”

The morals are fairly obvious. First, models matter, and mistakes in models can be significant. Second, change is risky. It can be very risky. (On the other hand, not changing also has its risks). Thirdly, take problems seriously.


## Kodak earnings restatement

In November 2005 Kodak restated its Q3 results by \$9 million. The restatement was attributed to restructuring and severance costs, plus a real estate gain. The restructuring costs were apparently because they got the accounting treatment wrong; we were told that “the magnitude of the worldwide restructuring program the company is undertaking imposes significant challenges to ensure the appropriate accounting”.

There was also an error of \$11 million in the severance calculation for just one employee. The error was traced to a faulty spreadsheet: apparently “too many zeros were added to accrued severance”. No payment was actually made to the employee in question (which was lucky for Kodak, but maybe unlucky for the employee). It sounds as if the error was either a simple data entry problem, or, possibly more likely, that the spreadsheet was expecting an entry in \$’000 but got one in \$.
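If the \$’000 theory is right, the failure is easy to reconstruct in a few lines (a hypothetical sketch, not Kodak’s actual spreadsheet), along with the kind of cheap range check that would have caught it:

```python
# Hypothetical reconstruction: a severance accrual expects its input in
# $'000, but the figure is keyed in raw dollars instead.
def accrue_severance(amount_thousands):
    """Return the accrual in dollars from an input expressed in $'000."""
    return amount_thousands * 1_000

intended = accrue_severance(11)      # an $11,000 severance, entered as 11
mistaken = accrue_severance(11_000)  # the same figure keyed in raw dollars
# mistaken is now $11,000,000 -- too many zeros added to accrued severance

# A cheap control: reject inputs outside a plausible range.
# The $1m cap is an assumed policy limit, purely for illustration.
def checked_accrual(amount_thousands, cap_thousands=1_000):
    if not 0 <= amount_thousands <= cap_thousands:
        raise ValueError(f"severance of {amount_thousands} $'000 looks wrong")
    return accrue_severance(amount_thousands)
```

A range check like this is no substitute for getting the input right, but it turns a silent thousand-fold error into a loud one.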

This could be yet another example of a spreadsheet that is theoretically correct, but is not easy to use. If the wrong information is used for the calculations, then the answers will be wrong: it’s our old friend, Garbage In, Garbage Out.


## Fannie Mae \$1.2bn honest mistake

In October 2003, about two weeks after releasing their third quarter earnings figures, Fannie Mae had to restate their unrealised gains by \$1.2 billion. This was apparently the result of “honest mistakes made in a spreadsheet used in the implementation of a new accounting standard.” Honest mistake or not, \$1.2 billion is a lot of money: more than the \$70 million of Provident Financial in March, or the \$24 million lost by TransAlta in June. It’s reasonably common to see errors of half a million or so, but they don’t usually make the headlines.

Apparently Fannie Mae picked up the error as part of the normal processes of preparing their financial statements for filing. Presumably they failed to pick it up as part of their review process before issuing the earnings statement. They claim that the event demonstrates that their accounting processes and controls work as they should.

Better late than never, I suppose, but I can’t help thinking that their processes and controls should have picked up the problem at an earlier stage. We don’t know whether the mistake was in the model or the implementation (ie, whether they had understood the accounting standard correctly but had made a mistake in the implementation of that understanding, or whether they had misunderstood the new accounting standard). It’s entirely possible that their reviewing processes don’t separate the two issues, thus making it harder to find either kind of mistake.

Let me know if you’d like any of your spreadsheets reviewed, or if you are not sure that your processes and controls are as effective as Fannie Mae’s. Fannie Mae apparently continue to be proud of theirs, so self-confidence isn’t necessarily a foolproof guide.


## Columbia space shuttle

At a press conference on 8th April 2003, Admiral Hal Gehman, Chairman of the Columbia Accident Investigation Board, discussed the model that was used to analyse the impact damage due to debris. If you recall, the prevalent theory at the time was that this was a major cause of the disaster.

He said “It’s a rudimentary kind of model. It’s essentially an Excel spreadsheet with numbers that go down, and it’s not really a computational model.” The implication seemed to be that computational models and Excel spreadsheets are incompatible.

However, this is not the case. The real problem with the model was not its implementation, but its basic structure. Apparently it’s a lookup table, populated with data from controlled experiments. Unfortunately the piece of debris under consideration is thought to have had a mass of about 1kg, much larger than any of the experimental objects. The trouble with lookup tables is that they are not much good when it comes to extrapolation beyond the limits of the data.
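The failure mode is easy to demonstrate. A sketch with invented numbers (NumPy’s `interp`, like most lookup implementations, simply clamps at the table edge):

```python
# Why lookup tables fail outside their data: queries beyond the last
# experimental point are clamped, not extrapolated. Numbers invented.
import numpy as np

mass_kg = np.array([0.01, 0.05, 0.10, 0.20])  # debris masses actually tested
damage = np.array([0.2, 1.1, 2.5, 5.4])       # some damage index

def lookup_damage(m):
    return float(np.interp(m, mass_kg, damage))

print(lookup_damage(0.15))  # interpolation between tested points: plausible
print(lookup_damage(1.0))   # a 1 kg fragment: clamped to 5.4, far too low
```

For a 1 kg piece of debris the table can only answer with its largest tested value, however much worse the real damage would be.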

A predictive model would obviously be more computationally complex, but that does not mean that it would not be possible to implement it in Excel. If the financial services industry is anything to go by, computational complexity has never been a reason for avoiding Excel. On the other hand, implementation in Excel might well be inadvisable, because there are few Excel developers who have the software engineering background to build a sufficiently well tested and robust implementation.


## Provident Financial modelling problem

On 6th March 2003 Provident Financial Group of Cincinnati announced a restatement of its results for the five financial years from 1997 to 2002. Between 1997 and 1999 Provident created nine pools of car leases. Part of the financial restatement was because the leases were treated off balance sheet, rather than on balance sheet as was later thought to be appropriate. But there was also a significant restatement of earnings, because there was a mistake in the model that calculated the debt amortisation for the leases. It appears that the analysts who built the model used for the first pool “put in the wrong value, and they didn’t accrue enough interest expense over the deal term. The first model that was put together had the problem, and that got carried through the other eight,” according to the Chief Financial Officer, who also went on to say that he did not think other banks had made similar errors. “We made such a unique mistake here that I think it’s unlikely.”

It appears that the error was found when Provident introduced a new financial model that was tested against the original, and that the two models produced different results. They then went back and looked at the original model to see which one was correct. We don’t know that these were spreadsheet models, but it’s entirely possible. And the lack of testing may have led to earned income being overstated by \$70 million over five years. Provident also faces a class action suit from investors.
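The parallel run that caught the problem is itself a useful control. A sketch of the idea, with two invented stand-in models (not Provident’s actual calculations):

```python
# Parallel-run testing: run the old and new models over the same inputs
# and flag any divergence. Both models here are invented stand-ins.
def interest_old(balance, rate, periods):
    # deliberately buggy: accrues one period too few
    return sum(balance * rate for _ in range(periods - 1))

def interest_new(balance, rate, periods):
    return sum(balance * rate for _ in range(periods))

def reconcile(balance, rate, periods, tolerance=1e-6):
    old = interest_old(balance, rate, periods)
    new = interest_new(balance, rate, periods)
    return abs(old - new) <= tolerance

print(reconcile(100.0, 0.05, 36))  # False: the models disagree
```

A discrepancy doesn’t tell you which model is wrong — as at Provident, someone still has to go back and find out — but at least it tells you that one of them is.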

If I am right, and the erroneous model was a spreadsheet (and from the fact that those who built it were referred to as “analysts” rather than “programmers” or “developers”, some sort of user-developed software seems likely), this is a classic example of a spreadsheet being built as a one-off and then reused without adequate controls. Later pools must have used a different spreadsheet, as they were not subject to the same restatement.

The CFO has more confidence than I do in the ability of other banks to avoid similar errors.

See the press release from Provident, and press coverage from the Cincinnati Post and New York Times.
