Categories: Modelling, Risk management

The limits of models

John Kay has an excellent piece in the FT about the limits of risk models. As he points out, they are no use for risks that they don’t model. In particular, if they are modelling financial risks, and are calibrated using recent data, they are no use at all for modelling major phase changes.

For example, a risk model for the Swiss franc that was calibrated using daily volatility data from the last few years was pretty useless when the franc was un-pegged from the euro. Such a model made a basic, probably implicit, assumption that the peg would stay in place. It was therefore not modelling all the risks associated with the franc.

As Kay puts it,

The Swiss franc was pegged to the euro from 2011 to January 2015. Shorting the Swiss currency during that period was the epitome of what I call a “tailgating strategy”, from my experience of driving on European motorways. Tailgating strategies return regular small profits with a low probability of substantial loss. While no one can predict when a tailgating motorist will crash, any perceptive observer knows that such a crash is one day likely.
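To make that concrete, here's a minimal simulation sketch, with purely illustrative numbers rather than market data: a model calibrated on daily returns from the pegged period sees almost no risk, while the de-peg jump is far outside anything in its calibration window.

```python
import random

random.seed(1)

# Illustrative daily returns for a short-CHF position during the peg:
# small, steady gains with very little noise (assumed numbers).
peg_returns = [random.gauss(0.0002, 0.001) for _ in range(750)]

mu = sum(peg_returns) / len(peg_returns)
var = sum((r - mu) ** 2 for r in peg_returns) / (len(peg_returns) - 1)
daily_vol = var ** 0.5

# A naive "99% worst day" from the calibration window:
# roughly mean minus 2.33 standard deviations, assuming normality.
worst_day_estimate = mu - 2.33 * daily_vol

# The un-pegging: the franc jumped by double-digit percentages in
# minutes, so a short-CHF position lost, say, 15% (illustrative).
depeg_day = -0.15

print(f"calibrated daily vol:  {daily_vol:.4%}")
print(f"model's 99% worst day: {worst_day_estimate:.4%}")
print(f"actual de-peg day:     {depeg_day:.2%}")
# The de-peg risk wasn't underestimated by the model; it simply
# wasn't in the model at all.
```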


Categories: Data, Modelling

Just one number

So often, just one number is not only not enough, it's positively misleading. We often see statistics quoted that, say, the average number of children per family is 1.8. First off, what sort of average? Mean, median or mode? It makes a difference. But really, the problem is that a mean (or median or mode) gives us only very limited information. It doesn't tell us what the data looks like overall: we get no idea of the shape of the distribution, or the range the data covers, or indeed anything other than this single point.
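A toy example, with made-up family sizes, shows how the three kinds of average can disagree, and how little any one of them says about the shape:

```python
from statistics import mean, median, mode

# Made-up numbers of children in 19 families: many childless,
# a cluster around two, and a long tail. Not real survey data.
children = [0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 2, 2, 2, 2, 3, 3, 4, 5, 6]

print(f"mean:   {mean(children):.2f}")  # about 1.74 -- the "1.8" figure
print(f"median: {median(children)}")    # 2
print(f"mode:   {mode(children)}")      # 0
# Three different "averages", three different answers -- and none of
# them reveals the lumpy shape: lots of zeros plus a long tail.
```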

Many traditional actuarial calculations are the same. The net present value of a series of payments tells us nothing about the period of time over which the payments are due, or how variable their amount is — information which is very important in a wide range of circumstances.
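A quick sketch of the point, with an invented discount rate and invented cashflows: two payment streams engineered to have the same net present value, one paid out inside a year, the other spread over thirty.

```python
def npv(cashflows, rate):
    """Net present value of payments due at t = 1, 2, ... years."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows, 1))

RATE = 0.03  # assumed flat discount rate

# Stream A: a single payment of 1000 in one year's time.
stream_a = [1000.0]

# Stream B: a level payment for 30 years, solved so the NPVs match.
annuity_factor = sum(1 / (1 + RATE) ** t for t in range(1, 31))
level = npv(stream_a, RATE) / annuity_factor
stream_b = [level] * 30

print(f"NPV of A: {npv(stream_a, RATE):.2f} over {len(stream_a)} year")
print(f"NPV of B: {npv(stream_b, RATE):.2f} over {len(stream_b)} years")
# Identical NPVs, wildly different terms -- the single number hides the
# duration, and with it the exposure to interest rates and inflation.
```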

Tim Harford has just written a good piece about how the same is true of government statistics, too. He points out not only that GDP isn't good for all purposes (a statement that just about everybody agrees with), but that there are lots of other statistics that are good for some purposes but not others. There is no such thing as a single number that measures everything.

And why should there be? Life, the world and everything is variable and complex. There's no reason to suppose that just one measurement will be able to sum it all up. We can think of the mean (or any other summary statistic) as a very simple model of the data. So simple that it's abstracted nearly all the complexity away. The model, like any other model, may be useful for some purposes, but it's never going to be the only possible, or only useful, model.

Categories: Modelling

Past performance is not a good guide to future performance

How many times have you seen a standard disclaimer about past performance not being a good guide to the future? And internally nodded wisely, thinking that of course it's not; the disclaimer is there to warn people less sophisticated than you.

How many times have you calibrated a model based on past performance? And recent past performance, at that. Are you using inflation estimates of 10% or even 15%? We were, back in the 80s, but that would look pretty stupid now. We looked at what was happening at the time, and assumed that current trends would continue.

There’s a really interesting piece by Ian Kelly over at Pieria that discusses this type of behaviour in the context of macro-economic modelling. But the principles are the same in all modelling, I reckon.

Kelly is actually discussing an article by Lawrence Summers and Lant Pritchett, which explains why China and India are unlikely to continue on their current growth trajectories for the next twenty years. They say “The single most robust empirical finding about economic growth is low persistence of growth rates. Extrapolation of current growth rates into the future is at odds with all empirical evidence about the strength of regression to the mean in growth rates.”

Kelly says:

When it comes to forecasting future growth, they suggest, past growth performance is of very little value.

Pritchett and Summers demonstrate this by examining the growth history of different developing economies and comparing the growth rate they enjoyed as they developed, as it were, with the growth rate that followed this period. They found that, in most instances, there was more variation in growth rates over time within the same country than there was between different countries.
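A synthetic illustration of low persistence (simulated numbers, not real growth data): if decade-average growth rates revert towards a common mean, extrapolating a country's current decade badly misjudges the next one.

```python
import random

random.seed(42)

WORLD_MEAN = 0.02   # assumed long-run mean growth rate
PERSISTENCE = 0.2   # assumed (low) decade-to-decade persistence
NOISE = 0.02        # assumed volatility of decade averages

def next_decade(current):
    """Toy AR(1): growth reverts towards the mean, plus noise."""
    return (WORLD_MEAN
            + PERSISTENCE * (current - WORLD_MEAN)
            + random.gauss(0, NOISE))

current = 0.08  # a country enjoying a stellar decade of 8% growth
draws = [next_decade(current) for _ in range(10_000)]

print(f"'current trends continue':  {current:.1%}")
print(f"mean-reverting expectation: {sum(draws) / len(draws):.1%}")
# With low persistence, the best guess for the next decade sits much
# closer to the world mean than to the country's own recent record.
```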

Modelling is really difficult. Modelling the future is really, really difficult. It’s not going to be the same as the past, or if it is, it’s going to be the same in ways we don’t expect.

Categories: Demographics, Modelling

Second hand water

Randall Munroe, over at xkcd, does some really funny cartoons, and is a physicist by training. Every week he answers a hypothetical question: this week it’s

How much of the Earth’s currently-existing water has ever been turned into a soft drink at some point in its history?

His answer, by the way, is “not much”. On the other hand, almost all of it has been drunk by a dinosaur.

One of the things I really like about these “what-if” answers is the way they demonstrate one of the important aspects of modelling: working out what’s significant and what’s not. And significance depends very much on what the purpose of the model is. Often, Munroe can make some really sweeping assumptions that are clearly not borne out in practice, but are equally clearly the right approximations to make for his purposes. And sometimes he says that he doesn’t know what assumption to make.

An example of a sweeping assumption comes in the answer to

How close would you have to be to a supernova to get a lethal dose of neutrino radiation?

where he assumes that you’re not going to get killed by being incinerated or vaporised. 

And in answering the question

When, if ever, will Facebook contain more profiles of dead people than of living ones?

the difficult assumption is whether Facebook is a flash in the pan, and stops adding new users, or whether it will become part of the infrastructure, and continue adding new users for ever (or at least for 50 or 60 years). There are also some sweeping demographic assumptions, of course.
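A crude cohort sketch makes the dependence on that assumption visible. Every number below is an assumption for illustration (flat mortality, round user counts), nothing like Munroe's actual demographics:

```python
def crossover_year(start_year=2014, living=1.3e9,
                   signups_per_year=0.0, mortality=0.01):
    """Year when cumulative dead profiles first exceed living ones.

    Hugely simplified: a flat mortality rate and no age structure.
    All parameter values are illustrative assumptions.
    """
    dead = 0.0
    year = start_year
    while dead < living:
        deaths = living * mortality
        dead += deaths
        living += signups_per_year - deaths
        year += 1
    return year

# "Flash in the pan": no new users ever again.
print(crossover_year(signups_per_year=0))
# "Part of the infrastructure": steady signups for decades to come.
print(crossover_year(signups_per_year=2e8))
# The answer shifts by the better part of a century depending on
# which assumption you make.
```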

(and while you’re at it, read the one on stirring tea)

I’m reminded of two things here. The first is doing mechanics problems in A-level maths: there was nothing difficult about the maths involved, the trick was all in recognising the type of problem. Was it a weightless, inelastic string, or a frictionless surface? It was all about building a really simple model.

The second is those Google interview questions we used to hear so much about, like how many golf balls fit in a school bus, or how many piano tuners there are in the world. The trick with these is to come up with a really simple model and then make reasonable guesses for the assumptions. And, of course, be aware of your model’s limitations.
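The piano-tuners question makes a nice worked sketch; every input below is a guess, which is rather the point:

```python
# Classic Fermi estimate: how many piano tuners in the world?
# Every number here is a guessed assumption, good to maybe a
# factor of a few at best.
population = 8e9
people_per_household = 3
pianos_per_household = 0.02        # one household in fifty owns one
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 2 * 5 * 50  # 2/day, 5 days, 50 weeks

pianos = population / people_per_household * pianos_per_household
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year

print(f"roughly {tuners:,.0f} tuners")  # order of magnitude only
# The model's limitations are obvious: it ignores institutional
# pianos, untuned pianos, part-time tuners... which is fine, as
# long as you remember it's only an order-of-magnitude estimate.
```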

Categories: Modelling, Software

All models are wrong…

… but some are more wrong than others. It's emerged that a calculator for cholesterol-related heart disease risk is giving some highly dubious results, so completely healthy people could start taking unnecessary drugs. It's not clear whether the problem is in the specification or the implementation, but either way the model is substantially overstating the risk.

The answer was that the calculator overpredicted risk by 75 to 150 percent, depending on the population. A man whose risk was 4 percent, for example, might show up as having an 8 percent risk. With a 4 percent risk, he would not warrant treatment under the guidelines, which say treatment is advised for those with at least a 7.5 percent risk and that treatment can be considered for those whose risk is 5 percent.
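In code, the quoted guideline logic and the example overprediction look something like this (thresholds and numbers as quoted above; nothing here is the calculator's actual method):

```python
def recommendation(ten_year_risk):
    """Treatment recommendation under the quoted guideline thresholds."""
    if ten_year_risk >= 0.075:
        return "treatment advised"
    if ten_year_risk >= 0.05:
        return "treatment can be considered"
    return "no treatment warranted"

true_risk = 0.04
reported_risk = true_risk * 2  # the quoted example: 4% shows up as 8%

print(recommendation(true_risk))      # no treatment warranted
print(recommendation(reported_risk))  # treatment advised
# An overprediction that looks modest in isolation is enough to
# flip a healthy patient across the treatment threshold.
```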

According to the New York Times (may be gated), questions were raised a year ago, before the calculator was released, but somehow the concerns weren’t passed on to the right people. It’s difficult to tell from a press article, but it appears as if those responsible for the calculator are reacting extremely defensively, and not really admitting that there’s anything wrong with the model.

The response was that, while the calculator was not perfect, it was a major step forward, and that the guidelines already say patients and doctors should discuss treatment options rather than blindly follow a calculator.

Of course they’re right, in that you should never believe a model to the exclusion of all other evidence, but it’s very difficult for non-experts not to. Somehow, something coming out of a computer always seems more reliable than it actually is.

Categories: Modelling

Implausible assumptions

Antonio Fatas has some interesting things to say about the reliance of economic models on implausible assumptions.

All models rely on assumptions and economic models are known (and made fun of) for relying on very strong assumptions about rationality, perfect information,… Many of these assumptions are unrealistic but they are justified as a way to set a benchmark model around which one is then allowed to model deviations from the assumptions.

The problem is, he says, that an unrealistically high standard of proof is set for departing from the benchmark. Essentially, it’s OK to use the benchmark model, which we all know is just plain wrong, but you need really strong evidence in order to support the use of other assumptions, even though they just might be right.

I suspect the problem is that it's easy to single out the benchmark assumptions: they're the ones that we wish were true, in a way, because they've got nice properties. All the other assumptions, which might actually be true, are messy; and why choose one set over another?