Categories
Uncertainty

Correlation is not causation, part 999

A few weeks ago the Economist’s blog had a piece with the tag line “How increases in computing power have driven higher share turnover”. It showed a nice chart with two lines rising inexorably upwards, pretty close together: one representing the transistor count in integrated circuits from 1947 to date, and the other the number of shares traded on the New York Stock Exchange over the same period.

Computing power has increased some 600-fold over the past 15 years […] This advancement has facilitated the ability to trade ever-larger volumes of shares.

Whenever I read something like this, my knee-jerk reaction is “correlation is not causation”. Just because two phenomena behave in roughly the same way doesn’t mean that one of them is causing the other. One of the better-known examples of this is the strong statistical association between annual changes in the S&P 500 stock index and butter production in Bangladesh. Admittedly it’s plausible that increased computing power has contributed to higher share turnover, but “driven” seems rather strong.

I stick by my knee-jerk reaction, but after discussing it with a friend I think there’s something even less satisfactory going on. The chart showing these two inexorably rising lines uses a logarithmic scale, on which equal growth rates show up as parallel lines. These lines are actually pretty divergent from about 1970 onwards, which means the growth rates aren’t even the same. That’s a very tenuous hook indeed on which to hang a conclusion of causality.
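
To see why the scale matters, here’s a quick sketch in Python, with invented growth rates rather than the actual transistor or turnover figures: two quantities growing exponentially at different rates both look like straight lines on a log chart, but the gap between them widens steadily rather than staying constant.

```python
import math

# Two invented series: one growing at 40% a year, one at 25% a year
# (illustrative rates only -- not the actual transistor or turnover data).
years = range(41)
fast = [1.40 ** t for t in years]
slow = [1.25 ** t for t in years]

for t in (0, 10, 20, 30, 40):
    gap = math.log10(fast[t]) - math.log10(slow[t])
    print(f"year {t:2d}: gap between the lines on a log10 chart = {gap:.2f}")

# The gap grows linearly with time, i.e. the lines diverge on a log chart.
# Only equal growth rates give parallel lines, so two rising lines on a log
# scale don't even imply the same growth rate, let alone causation.
```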

Categories
Uncertainty

Statistically speaking…

Numbers are often perceived as a sign of respectability, which is presumably why press releases are full of them — it seems so much more believable to say that 75.4% of people do such-and-such than to say that many, or even most, people do. Quote a specific percentage and people tend to believe it.

The trouble is, the numbers we see in the press are often misleading or just plain wrong. Some recent sources of error include:

  • Journalists writing the story haven’t fully understood the press release, or the writers of the press release didn’t understand the original results. A common area of confusion is the statistical significance of quoted results, and what that really means. There’s a really good Understanding Uncertainty blog post on this. In summary:

Take Paul the Octopus, who correctly predicted 8 football results in a row, which is unlikely (probability 1/256) if the results were due to chance alone. Is it reasonable to conclude that these results are unlikely to be due to chance (in other words, that Paul is psychic)? Of course not, and nobody said this at the time, even after this 2.5 sigma event. So why do they say it about the Higgs boson?

  • The numbers being compared aren’t like for like. There’s a good Understanding Uncertainty blog post on this one, too (it’s an excellent website!). The recent news that Brits are more obese than other Europeans is a case in point: first, the figures for most countries are for people aged 18 and over, but those for the Brits (who are, in this case, actually just the English) are for people aged 16 and over; and second, the data for most countries is based on asking people what they weigh and how tall they are, whereas the English data is based on actual measurements. And guess what? People don’t always tell the absolute truth when asked how heavy they are.
  • People, and possibly especially journalists, are really unwilling to believe that phenomena are due to chance rather than to some underlying cause. I’ve written about this before. For instance, all those stories in the press about such-and-such a local authority being a black spot for whatever health risk is top of the list that day are often due simply to random variation: the smaller the population, the more likely its results are to fall relatively far from the mean purely by chance. It’s very easy to over-interpret results, as the quick simulation below shows.
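
Here’s that simulation: a rough sketch with entirely invented populations and a single made-up underlying risk, not real health data. Every area gets exactly the same true rate, yet the apparent “black spots” are almost always the small areas.

```python
import numpy as np

rng = np.random.default_rng(1)
TRUE_RATE = 0.001  # the same underlying risk everywhere (an invented figure)

# Invented mix of area sizes: many small areas, a few large ones.
populations = np.array([5_000] * 50 + [50_000] * 30 + [500_000] * 20)

# Each area's case count is binomial with the *same* true rate.
cases = rng.binomial(populations, TRUE_RATE)
rates_per_1000 = cases / populations * 1000

print("Highest observed rates (per 1,000) and the populations behind them:")
for i in np.argsort(rates_per_1000)[::-1][:5]:
    print(f"  {rates_per_1000[i]:.2f} in an area of {int(populations[i]):,}")

# The apparent 'black spots' are nearly always the small areas: with an
# identical true risk, a small population is simply more likely to end up
# a long way from the overall mean.
```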

People aren’t always very good at understanding percentages, either, and in particular the difference between percentages and percentage points. And people are really bad at understanding probabilities and risks:

The trouble is, many of us struggle with understanding risk. I realised how tenuous my grasp of risk was when I noticed that 1 in 20 sounded a bigger risk to me than 5 percent (yes, they’re exactly the same). Representing risk so that people can get a true understanding of it is an art as well as a science.

Which is why giving children lessons in gambling may not be a stupid idea.
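
A toy illustration of the two confusions above, with numbers I’ve made up purely for the example: the same risk expressed as “1 in N” or as a percentage, and a change expressed in percentage points versus as a relative percentage change.

```python
# Same risk, two descriptions.
risk = 1 / 20
print(f"1 in 20 = {risk:.0%}")  # exactly the same thing as 5%

# Percentage points vs percentages (made-up rates, purely for illustration).
old_rate, new_rate = 0.04, 0.05                      # 4% rising to 5%
point_change = (new_rate - old_rate) * 100           # in percentage points
relative_change = (new_rate - old_rate) / old_rate * 100
print(f"a rise of {point_change:.0f} percentage point ...")
print(f"... which is a {relative_change:.0f}% increase on the original rate")
```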

There are many people out there doing their best to introduce some sanity into the world. The Understanding Uncertainty website is consistently interesting and well written (have I mentioned that before?), Ben Goldacre has lots of useful stuff, the Guardian’s datablog is just starting a series on statistics (the first article explains samples and how bias can skew results), and Straight Statistics is also well worth a look.

Categories
Uncertainty

How can you tell?

It’s well known that people are very keen to find causality in the world, and reluctant to accept that a lot of what goes on is just random. Those of us who’ve been educated properly know that correlation is not causation, but it’s sometimes difficult to put that into practice.

There are some common examples: people who have lucky mascots, or rituals they go through before special events, such as exams, or sports matches they are taking part in (or watching, or that a team they support is playing in). To my mind, many business books that use the argument “XYZ Corp did really well under Pat as CEO, Pat has these character traits and behaviour patterns, so if you can develop the same traits and patterns your business will do really well too” are along the same lines. You need a very strong argument that causality is actually present before believing it.

It’s the argument against active fund management, too: if you take a large number of managers, one of them will come out top over a year, or indeed over any period you like to mention, even if skill has no effect on their performance at all. So the fact that so-and-so has had consistently good results isn’t a very strong argument that they are actually better at it than anyone else, as opposed to luckier than most other people.
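
A quick sketch of that argument (an invented setup, not a model of any real market): give every “manager” a 50:50 chance of beating the benchmark each year, purely at random, and with enough of them somebody will still assemble an impressive-looking record.

```python
import random

random.seed(7)
N_MANAGERS, N_YEARS = 1000, 10

# Each 'manager' beats the benchmark each year with probability 0.5 -- luck only.
wins = [sum(random.random() < 0.5 for _ in range(N_YEARS))
        for _ in range(N_MANAGERS)]

best = max(wins)
print(f"Best record: beat the benchmark in {best} of {N_YEARS} years")
print(f"Managers with 8 or more winning years: {sum(w >= 8 for w in wins)}")
# With 1,000 coin-flipping managers, several dozen will show 8+ good years
# out of 10, so a consistently good record, on its own, is weak evidence
# of skill rather than luck.
```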

However, even if some fund managers are genuinely skilled, it’s possible for unskilled ones to mimic their performance. Tim Harford explains it well:

… it is possible for an unskilled fund manager to mimic a genuinely skilled one, in the same way that an insect might mimic a leaf, or a harmless creature mimic a poisonous one.

This mimicry, too, involves three steps: first, invest all your funds in whatever benchmark you need to beat, whether it’s treasury bills or a stock market index; second, make a bet that some unlikely event will not come to pass using the invested funds as security; finally, boast of benchmark beating returns, because you’ve delivered the benchmark plus the additional money from winning the bet. Collect your performance fee. (In the unlikely event that you lost the bet and with it all your investors’ cash, simply cough awkwardly and look at your shoes.)

It turns out that it’s impossible to tell the difference between this and a more conventional strategy just by looking at the investment returns.

These are the “black swans” made famous by Nassim Taleb: low probability, high-impact events, except that these particular swans are genetically engineered – deliberately manufactured and then hidden away, to escape at unwelcome moments.

Harford goes on to explain how these mimicking strategies can be used to game nearly all bonus schemes based solely on performance.
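
For what it’s worth, here is a deliberately crude sketch of the three-step recipe in the quote, with entirely invented numbers: hold the benchmark, write insurance against a rare event each year, and report benchmark-plus returns until the event happens.

```python
import random

random.seed(42)
BENCHMARK = 0.05      # benchmark return each year (invented)
PREMIUM = 0.02        # premium for insuring against the rare event (invented)
DISASTER_PROB = 0.05  # chance per year that the insured-against event occurs

def mimic_year():
    """One year of the mimicry strategy: benchmark plus premium, unless the
    rare event happens, in which case the fund is wiped out."""
    if random.random() < DISASTER_PROB:
        return -1.0
    return BENCHMARK + PREMIUM

for year in range(1, 21):
    r = mimic_year()
    note = "  <-- the unlikely event happens; investors lose everything" if r == -1.0 else ""
    print(f"year {year:2d}: {r:+.1%}{note}")

# Until the rare event occurs, every year shows 'benchmark + 2%', which is
# indistinguishable, from the returns alone, from genuine skill.
```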

This suggests an obvious question: is this in fact a surer way of getting good investment performance than relying on skill anyway?

Categories
risk management, Uncertainty

Confidence and causality

OK, it’s a bit trite, but human behaviour is really important, and understanding it well is a goal for people in many different fields. Marketing, education and social policy all seek to influence our behaviour in different ways and for different purposes — that’s surely what the whole Nudge thing is all about, for a start. Economists have traditionally taken a pretty simplistic view: homo economicus seems to have a very narrow view of the utility function he (and it is often he) is trying to maximise.

Psychologists have known for some time that real life just isn’t that simple. Daniel Kahneman and Amos Tversky first published some of their work on how people make “irrational” economic choices in the early 1970s, and since then the idea of irrationality has been widely accepted. It’s now well known that we have many behavioural biases: the trouble is, what do we do with that knowledge? It’s difficult to incorporate into economic or financial models (or indeed other behavioural models): it’s often possible to model one or two biases, but not the whole raft. Which means that models that rely, directly or indirectly, on assumptions about people’s behaviour can be spectacularly unreliable.

Kahneman, who won the 2002 Nobel Memorial Prize in Economics (Tversky died in 1996), has written a recent article about the dangers of overconfidence (it’s well worth a read). One thing that comes out of it for me is how much people want to be able to ascribe causality: saying that variations are just random variation, rather than the result of people’s skill at picking investments or of some environmental or social effect on bowel cancer, is not a common reaction, and indeed is often resisted.

It’s something we should think about when judging how much reliance to place on the results of our models. When I build a model, I naturally think I’ve done a good job, and I’m confident that it’s useful. And if, in due course, it turns out to make reasonable predictions, I’m positive that it’s because of my skill in building it. But, just by chance, my model is likely to be right some of the time anyway. It may never be right again.

Categories
Actuarial, Uncertainty

Getting rates in a mess

Another good blog post from Understanding Uncertainty: for once not based on a howler from the British press. Instead, it’s based on a howler from the German press – “High suicide rate in German forces serving abroad – every fifth soldier takes his own life”. What they actually meant was that one in five of the deaths among soldiers serving abroad is a suicide.

The post goes on to discuss whether the suicide rate is higher or lower than would be expected, and along the way explains the concepts behind the “exposed to risk” (without actually using the term). A great example of how to explain something that can get pretty technical in an uncomplicated way. If only most actuaries could do the same.
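
The distinction is easy to see with some entirely made-up numbers (nothing to do with the actual German figures): “every fifth soldier takes his own life” and “one in five deaths is a suicide” share a numerator but have completely different denominators, and only the first needs the whole exposed to risk as its denominator.

```python
# Entirely invented numbers, purely to illustrate the two denominators.
soldiers_deployed = 100_000   # the 'exposed to risk'
deaths = 50                   # all deaths among them over the period
suicides = 10                 # suicides among those deaths

print(f"Suicides as a share of deaths:   {suicides / deaths:.1%}")
print(f"Suicides as a share of soldiers: {suicides / soldiers_deployed:.3%}")
# 'Every fifth soldier takes his own life' would need the second figure to be
# 20%; with these numbers it is 0.010%. Same numerator, very different
# denominators -- and only the second uses the exposed to risk.
```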

Categories
Uncertainty

Critical thinking

A very good blog post in the Guardian by Jon Butterworth, discussing the faster-than-light neutrino result (or possible result). One thing that wasn’t at all clear from the mainstream press coverage was that the whole thing depends on probability distributions. It’s obvious when you think about it (it’s not as if you can label an individual neutrino and send it from one place to another, hundreds of miles away), but it does change the complexion of the result. It also makes it rather more complex, of course, and as usual there is massive scope for disagreement about the statistical techniques used and whether everything has been allowed for.
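
As a toy illustration of why the distributions matter (made-up numbers, nothing like the actual measurement): the claimed effect is a shift in the mean of a distribution of arrival times, so what matters is the size of that shift relative to the uncertainty in the estimated mean, which is exactly where the arguments about statistical technique begin.

```python
import math
import random

random.seed(0)
# Invented arrival-time residuals in nanoseconds: individual measurements are
# widely spread (sigma = 100 ns here) and the claimed effect is a small shift
# in their mean (10 ns early here). These are not the real numbers.
true_shift, sigma, n = 10.0, 100.0, 10_000
early_by = [random.gauss(true_shift, sigma) for _ in range(n)]

mean = sum(early_by) / n
std_error = sigma / math.sqrt(n)   # statistical uncertainty of the mean
print(f"estimated mean early arrival: {mean:.1f} ns")
print(f"standard error of the mean:   {std_error:.1f} ns")
print(f"shift / standard error:       {mean / std_error:.1f} sigma")
# No single neutrino tells you anything: the claim is about the mean of a
# distribution, and its significance depends on the statistical (and
# systematic) uncertainties in that mean, which is where the arguments start.
```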

I’m looking forward to the next chapters in what I’m sure will turn out to be a long story.

Categories
Uncertainty

Visualising uncertainty

Understanding Uncertainty, David Spiegelhalter’s site, is an absolute must for anyone who reasons about or models uncertainty. David Spiegelhalter, Mike Pearson and Ian Short have just had a paper about visualising uncertainty published in Science. If it’s anything like the content of the website, it’s a must-read.