An article in the FT’s recent special report on Finance and Insurance started from the premise that VaR models were a significant factor in landing banks with huge losses in the wake of the collapse of the US housing market, and went on to discuss how new models are being developed to overcome some of their limitations. Part of the point is valid — there were many models that didn’t predict the collapse. But the article is positively misleading in places. For instance, it implies that VaR models are based on the normal distribution:
VaR models forecast profit and loss, at a certain confidence level, based on a bell-shaped, or “normal”, distribution of probabilities.
In fact Value at Risk, or VaR, is a statistical estimate that can be based on any distribution. And it's pretty obvious that for many financial applications a normal distribution would be inappropriate. The people who develop these risk models are bright enough that this won't have escaped them. The real problem is that it's difficult to work out what a good distribution would be, or, more accurately, it's difficult to parameterise whichever distribution you choose. To get an accurate figure for a VaR you need to know the shape of the distribution out in the tails. And for that, you need data. But by definition, the situations out in the tails aren't encountered very often, so there's not much data, and that applies whatever distribution you're using. So simply moving away from the normal distribution to something a bit more sexy isn't necessarily going to make a huge difference to the accuracy of the models.
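To make that concrete, here's a minimal sketch (my own illustration, not anything from the FT article or from any bank's actual model), using SciPy and some invented portfolio figures. Parametric VaR is just a quantile of whatever loss distribution you assume, so swapping the normal for a fat-tailed alternative moves the number, but both answers hinge on a tail shape that the historical data can't easily pin down.

```python
# Sketch only: the loss figures and distribution choices below are invented.
from scipy.stats import norm, t
import math

confidence = 0.995          # 99.5% confidence, i.e. a 0.5% tail
mu, sigma = 0.0, 25.0       # hypothetical annual loss mean / std dev, in GBP millions

# VaR under a normal distribution: simply the 99.5th percentile of losses.
var_normal = norm.ppf(confidence, loc=mu, scale=sigma)

# VaR under a fat-tailed Student-t, rescaled to the same mean and variance.
df = 4
t_scale = sigma * math.sqrt((df - 2) / df)   # match the normal's variance
var_t = t.ppf(confidence, df, loc=mu, scale=t_scale)

print(f"99.5% one-year VaR, normal tail:    {var_normal:.0f}m")
print(f"99.5% one-year VaR, Student-t tail: {var_t:.0f}m")
# Both figures are only as good as the tail assumption, and the tail is
# exactly where the historical data is scarce.
```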
The article goes on to discuss the use of Monte Carlo models to calculate VaR. Monte Carlo models are useful if the mathematics of the distribution you are using don't lend themselves to simple analytic solutions. But they don't stop you having to know the shape of the distribution out in the tails. So they do help extend the range of distributions that can usefully be used, but the result is still a VaR model.
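Again purely as an illustration (the parameters and the choice of a Student-t here are mine, not the article's), a Monte Carlo VaR calculation looks something like this. You still have to pick and parameterise the distribution; the simulation only spares you the analytic quantile.

```python
# Sketch of a Monte Carlo VaR calculation with invented parameters.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios = 1_000_000
confidence = 0.995

# You still have to choose and parameterise a distribution: here, a
# fat-tailed Student-t for annual portfolio losses in GBP millions.
df, scale = 4, 17.7
losses = scale * rng.standard_t(df, size=n_scenarios)

# The VaR is just the empirical 99.5% quantile of the simulated losses.
mc_var = np.quantile(losses, confidence)
print(f"Monte Carlo 99.5% VaR: {mc_var:.0f}m")
```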
And that’s another problem entirely. VaR, like any other statistical estimate (such as a mean, median or variance), is just a single number that summarises one aspect of a complex situation. Given a probability and a time period, it gives you a threshold value for your loss (or profit, but in risk management applications it's usually the loss that's of interest). So you can say, for instance, that there's a 0.5% chance that, over a period of one year, your loss will be £100m or more. But it doesn't tell you how much more than the threshold value your loss could be: £200m? £2bn?
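Here's a contrived example of that last point, with made-up numbers: two loss distributions engineered to have exactly the same 99.5% VaR of £100m, but very different behaviour once you're past it.

```python
# Contrived illustration: same VaR, very different losses beyond it.
import numpy as np
from scipy.stats import norm, t

confidence = 0.995
target_var = 100.0    # force both distributions to a 99.5% VaR of 100m

# Scale each distribution so its 99.5th percentile is exactly 100m.
sigma = target_var / norm.ppf(confidence)      # thin-tailed normal
df = 3
t_scale = target_var / t.ppf(confidence, df)   # fat-tailed Student-t

rng = np.random.default_rng(0)
n = 2_000_000
normal_losses = sigma * rng.standard_normal(n)
t_losses = t_scale * rng.standard_t(df, size=n)

# Both have the same 99.5% VaR; the average loss beyond it does not match.
print(f"Average loss beyond the 100m VaR, normal:    "
      f"{normal_losses[normal_losses > target_var].mean():.0f}m")
print(f"Average loss beyond the 100m VaR, Student-t: "
      f"{t_losses[t_losses > target_var].mean():.0f}m")
```

The VaR itself can't distinguish between the two; the difference only shows up in what happens beyond the threshold.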
And it's only a statistical estimate, too. A 0.5% chance may seem very unlikely, but it can happen.
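A rough back-of-the-envelope calculation, crudely assuming each year is independent of the last, makes the point:

```python
# If a 1-in-200 year really does have a 0.5% chance, and years are
# independent (a big if), how likely is at least one over two decades?
p_breach = 0.005
years = 20
p_at_least_one = 1 - (1 - p_breach) ** years
print(f"Chance of at least one 1-in-200 year in {years} years: "
      f"{p_at_least_one:.1%}")   # roughly 9.5%
```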
I wouldn’t disagree that a reliance on VaR models contributed to banks’ losses, but I would express it more as an over-reliance on models, full stop. It’s really difficult to model things out in the tails, whatever type of model you are using.