Hans Rosling on population growth and other related things.
John Kay has an excellent piece in the FT about the limits of risk models. As he points out, they are no use for risks that they don’t model. In particular, if they are modelling financial risks, and are calibrated using recent data, they are no use at all for modelling major phase changes.
For example, a risk model for the Swiss franc that was calibrated using daily volatility data from the last few years was pretty useless when the franc was un-pegged from the euro. Such a model made a basic, probably implicit assumption that the peg was in place. It was therefore not modelling all the risks associated with the franc.
As Kay puts it,
The Swiss franc was pegged to the euro from 2011 to January 2015. Shorting the Swiss currency during that period was the epitome of what I call a “tailgating strategy”, from my experience of driving on European motorways. Tailgating strategies return regular small profits with a low probability of substantial loss. While no one can predict when a tailgating motorist will crash, any perceptive observer knows that such a crash is one day likely.
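Kay's tailgating idea is easy to put into numbers. A minimal sketch, with made-up probabilities and payoffs (nothing here comes from Kay's article): a bet that wins a little almost every period, with a small chance of a large loss, can look profitable for years while having a negative expected value.

```python
# A toy "tailgating strategy": regular small profits, rare big loss.
# The probability and payoff figures are illustrative assumptions only.
p_crash = 0.01           # chance of the rare blow-up in any period
small_win = 1.0          # usual small profit
big_loss = -150.0        # loss when the crash finally happens

expected = (1 - p_crash) * small_win + p_crash * big_loss
print(expected)   # -0.51: negative expected value despite winning 99% of the time
```

The point is that a track record of steady wins tells you almost nothing about the one event that dominates the expectation.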
Simon Wren-Lewis, in his mainly macro blog, points out that there is a big disconnect between Conservative party words and action on climate change. Their words are vaguely green, and imply that they take climate change seriously. Their actions are not in the slightest bit green.
In an earlier article, George Monbiot noted a part of the innocently titled ‘Infrastructure Bill’ currently going through parliament. Section 36 (p39) is headed ‘Maximising economic recovery of UK petroleum’. Its principal objective is to do just that. Now it does not take a climate scientist to realise that trying to restrict our use of fossil fuels to avoid climate change requires leaving quite a lot of them in the ground. So this bill suggests that whatever the UK government says about climate change, the UK contribution in terms of limiting extraction of oil will be exactly zero.
On the whole, most people in the UK are not climate change deniers, so the Conservative party probably isn’t going to come out openly on that side of the debate. But that doesn’t mean they won’t behave like climate change deniers.
A somewhat frequent criticism of common economic theories and frameworks is that they are isolated from real world concerns such as energy and resource constraints: that the concept of limited resources, and ideas like the second law of thermodynamics, simply don’t seem to affect the economics at all. You come across this criticism primarily at what might be called the greener fringe, which means that it is rather pooh-poohed by some.
Economic theory is, of course, just a model of the real world. It’s bound to simplify and abstract some aspects, and concentrate on others. And many economists don’t consider resource and environment issues to be of primary concern. But what might economics look like if it did take energy (and other resources) as fundamental to the model, rather than as extras which it would be nice to take into account? Gail Tverberg has an interesting piece in which she explains how she sees energy use as a primary driver of economic growth. It’s a good read, and makes a lot of sense. I’m looking forward to her follow-up piece, in which she says she will talk about how debt fits into the picture.
So often, just one number is not only not enough, it’s positively misleading. We often see statistics quoted that, say, the average number of children per family is 1.8. First off, what sort of average? Mean, median or mode? It makes a difference. But really, the problem is that a mean (or median or mode) gives us only very limited information. It doesn’t tell us what the data looks like overall: we get no idea of the shape of the distribution, or the range the data covers, or indeed anything other than this single point.
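The point about a mean hiding the shape of the data can be shown in a few lines. This is a sketch with invented numbers, not real family-size data: two datasets share a mean of 2 children per family but have completely different distributions.

```python
# Two "children per family" datasets with the same mean but very
# different shapes -- the mean alone can't tell them apart.
# (Illustrative numbers, not real census data.)
from statistics import mean, median, mode

a = [2, 2, 2, 2, 2]   # every family has exactly 2 children
b = [0, 0, 0, 4, 6]   # most families have none, a few have many

print(mean(a), mean(b))      # both means are 2
print(median(a), median(b))  # medians: 2 and 0
print(mode(a), mode(b))      # modes: 2 and 0
```

Quote only the mean and the two datasets look identical; the median and mode immediately reveal that they are not.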
Many traditional actuarial calculations are the same. The net present value of a series of payments tells us nothing about the period of time over which the payments are due, or how variable their amounts are — information which is very important in a wide range of circumstances.
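Here is a sketch of that NPV point, with an invented 5% discount rate: two payment streams are constructed to have exactly the same net present value, yet one pays out in a single year and the other over a decade.

```python
# A single NPV figure hides duration: these two streams have the same
# present value at 5%, but very different terms. (Illustrative only.)

def npv(payments, rate):
    """Present value of payments[t] due at the end of year t+1."""
    return sum(c / (1 + rate) ** (t + 1) for t, c in enumerate(payments))

rate = 0.05
short_stream = [100.0]              # one payment of 100, due in year 1
pv = npv(short_stream, rate)

# Level payments over 10 years, scaled so the NPV matches exactly.
annuity = sum(1 / (1 + rate) ** t for t in range(1, 11))
spread_stream = [pv / annuity] * 10

print(pv, npv(spread_stream, rate))   # equal NPVs, one year vs ten
```

An actuary quoting only `pv` has thrown away everything that distinguishes the two liabilities.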
Tim Harford has just written a good piece about how the same is true of government statistics, too. He points out not only that GDP is not good for all purposes (a statement that just about everybody agrees with), but that there are lots of other statistics that are good for some purposes but not others. There is no such thing as a single number that measures everything.
And why should there be? Life, the world and everything is variable and complex. There’s no reason to suppose that just one measurement will be able to sum it all up. We can think of the mean (or any other summary statistic) as a very simple model of the data. So simple that it’s abstracted nearly all the complexity away. The model, like any other model, may be useful for some purposes, but it’s never going to be the only possible, or only useful, model.
How many times have you seen a standard disclaimer about past performance not being a good guide to the future? And internally nodded wisely, thinking that of course it’s not, the disclaimer is there to warn less sophisticated people than you.
How many times have you calibrated a model based on past performance? And recent past performance, at that. Are you using inflation estimates of 10% or even 15%? We were, back in the 80s, but that would look pretty stupid now. We looked at what was happening at the time, and assumed that current trends would continue.
There’s a really interesting piece by Ian Kelly over at Pieria that discusses this type of behaviour in the context of macro-economic modelling. But the principles are the same in all modelling, I reckon.
Kelly is actually discussing an article by Lawrence Summers and Lant Pritchett, which explains why China and India are unlikely to continue on their current growth trajectories for the next twenty years. They say “The single most robust empirical finding about economic growth is low persistence of growth rates. Extrapolation of current growth rates into the future is at odds with all empirical evidence about the strength of regression to the mean in growth rates.”
When it comes to forecasting future growth, they suggest, past growth performance is of very little value.
Pritchett and Summers demonstrate this by examining the growth history of different developing economies and comparing the growth rate they enjoyed as they developed, as it were, with the growth rate that followed this period. They found that, in most instances, there was more variation in growth rates over time within the same country than there was between different countries.
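Regression to the mean in growth rates can be sketched with a toy mean-reversion model. The persistence parameter and growth figures below are assumptions for illustration, not Pritchett and Summers's estimates: with persistence well below 1, a country booming at 8% reverts toward the world average within a few years, so naive extrapolation of today's rate goes badly wrong.

```python
# Toy mean-reversion: g(t+1) = mean + rho * (g(t) - mean).
# rho and the growth figures are illustrative assumptions only.
world_mean = 2.0   # assumed long-run average growth rate (%)
rho = 0.3          # assumed (low) persistence of growth rates

g = 8.0            # a country growing at 8% today
path = [g]
for _ in range(20):
    g = world_mean + rho * (g - world_mean)   # revert toward the mean
    path.append(g)

# Naive extrapolation keeps 8% forever; the reverting path collapses fast.
print(round(path[1], 2), round(path[-1], 2))   # 3.8 after one year, ~2.0 after twenty
```

The gap to the mean shrinks by a factor of `rho` each year, so the boom is ancient history long before the twenty-year horizon.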
Modelling is really difficult. Modelling the future is really, really difficult. It’s not going to be the same as the past, or if it is, it’s going to be the same in ways we don’t expect.
Which is harder to understand, physics or earth system science? Which is more important to get right? Which do film-makers try hardest to get right? Oliver Morton, on his Heliophage blog, says
a lot of people, both film makers and film discussers, think getting physics right, or at least seeming to or trying to, is in some way more important than getting the science of the earthsystem right. This shows, to my mind, strange priorities. The carbon cycle is a lot more easy to understand than general relativity…
He comes to this conclusion in the light of a number of films he’s seen this year. I’m not really a movie person, so don’t know if his impression is representative. But I get uncomfortable whenever I see gross misrepresentations of science, or stuff that’s just plain wrong. And, anyway, the carbon cycle, water cycle and all the other earth system stuff is mainly just physics and chemistry (with a small amount of biology thrown in).
Here’s a long but fascinating post on deciphering the opening chord in A Hard Day’s Night. Along the way it gives a good explanation of Fourier transforms for the non-mathematician.
It also gives a really good example of why it’s important to look at the overall reasonableness of a result, rather than blindly relying on the maths. Summary of this bit: somebody ran a Fourier analysis on the chord, assumed that the loudest frequencies were the fundamentals, and came up with the following notes:
- Harrison (electric 12 string guitar): A2 A3, D3 D4, G3 G4, C4 C4
- McCartney (electric bass guitar): D3
- Lennon (acoustic 6 string guitar): C5
- Martin (piano): D3 F3 D5 G5 E6
But why would Lennon play only one note? It was meant to be a dramatic opening: give it all you’ve got. Single notes just don’t cut it.
This observation led to a realisation that the assumption that the loudest frequencies were the fundamentals was flawed. Record producers often fiddle with the frequencies, and in this case it appears that the bass frequencies were turned down.
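The flawed step — assuming the loudest FFT peaks are the fundamentals — is easy to reproduce with a synthetic signal (this is not the actual recording, just a sketch). Mix a note's second harmonic louder than its fundamental, as a producer turning the bass down might, and the loudest bin points at the wrong note.

```python
# Peak-picking pitfall: the loudest FFT bin need not be the fundamental.
# Synthetic one-second signal, so bins land on exact 1 Hz frequencies.
import numpy as np

sr = 8000
t = np.arange(sr) / sr                 # one second of samples
f0 = 110.0                             # A2 fundamental

# Fundamental mixed quieter than its 2nd harmonic ("bass turned down").
signal = 0.3 * np.sin(2 * np.pi * f0 * t) \
       + 1.0 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)
loudest = freqs[np.argmax(spectrum)]
print(loudest)   # 220.0 -- the harmonic, not the 110 Hz fundamental
```

Naive peak-picking here would transcribe an A3 that nobody played, which is exactly the kind of error the reasonableness check about Lennon's part caught.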
There’s lots of other interesting stuff in the post about how the author went about his detective work, and what really hits home to me is the variety of techniques he used. The moral of the story is to use all the information at your disposal, not just the maths.
There’s a lovely piece in Pieria about a data visualisation exhibition at the British Library, positing John Graunt’s analysis of London deaths as an early spreadsheet.
And yes, there were some errors in it.