
Does size matter?

Over the last few months there have been several interesting pieces about innovation, the size of companies, and other rather loosely connected topics.

Back in December, the Schumpeter column in the Economist reviewed an article arguing that large firms are often more innovative than small ones. This seems counter-intuitive – surely small companies are nimble and creative, and large ones are stultified by bureaucracy. But it turns out that there are good reasons why intuition may be wrong here, although Schumpeter thought that there are limits to the argument: large firms may be innovative, but it’s usually incremental innovation rather than disruptive innovation.

This all fits in quite well with an observation a friend and I made recently when discussing discrimination in employment. You might think that there would be less diversity in large, established companies than in small companies, especially high-tech startups. But in our experience the reverse is true. We thought it was probably because large companies have better processes in place, are more conscious of the issues, and are less likely to hire on the basis of existing friendships. Also, of course, they have larger workforces, so they can reflect population diversity more easily than a small company in which every person constitutes, say, 10% of the employees.

Conventional wisdom also has it that most job creation happens in small companies. Well, up to a point. Recent research has found that, in fact, once you control for the age of the firm, there is no systematic relationship between firm size and employment growth, in the USA at least. Small firms tend to be younger; if they are older, then they are less successful (else they’d be bigger). (I can’t resist: repeat after me, correlation is not causation.)

The Economist has a leader this week that discusses the contrast between large and small firms. It argues that, on the whole, economies relying more on small firms have been less successful than those with more large firms (think northern and southern Europe). Apparently productivity is much greater in large firms, at least partly because of economies of scale. Don’t have special regulatory and fiscal breaks for small firms, the leader argues, but go for growth instead.



Statistically speaking…

Numbers are often perceived as a sign of respectability. Press releases often include them — it seems so much more believable to say 75.4% of people do such-and-such than to say many or even most people. Quote a specific percentage and people tend to believe it.

The trouble is, the numbers we see in the press are often misleading or just plain wrong. Some recent sources of error include:

  • Journalists writing the story have not fully understood the press release, or the writers of the press release didn’t understand the original results. A common area of confusion is the significance of quoted results, and what that really means. There’s a really good Understanding Uncertainty blog on this. In summary:

Take Paul the Octopus, who correctly predicted 8 football results in a row, something that is unlikely (probability 1/256) to happen purely by chance. Is it reasonable to say that these results are unlikely to be due to chance (in other words, that Paul is psychic)? Of course not, and nobody said this at the time, even after this 2.5 sigma event. So why do they say it about the Higgs boson? (There’s a rough sketch of the arithmetic after this list.)

  • The numbers being compared aren’t like for like. There’s a good Understanding Uncertainty blog on this one, too (it’s an excellent website!). The recent news that Brits are more obese than other Europeans is a case in point: first, the figures for most countries are for people aged 18 and over, but those for the Brits (who in this case are actually just the English) are for people aged 16 and over; and second, the data for most countries is based on asking people what they weigh and how tall they are, but the English data is based on actual measurements. And guess what? People don’t always tell the absolute truth when asked how heavy they are.
  • People, and possibly especially journalists, are really unwilling to believe that phenomena are due to chance rather than to causality. I’ve written about this before. For instance, all those stories in the press about such-and-such a local authority being a black spot for whatever health risk is top of the list that day are often due simply to random variation. In brief, a smaller population is quite likely to produce results relatively far from the mean, so it’s very easy to over-interpret them (a quick simulation below shows the effect).
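
On the Paul the Octopus point, here is a minimal Bayesian sketch of why a 1-in-256 run of correct guesses still shouldn’t convince anyone; the prior is entirely invented for illustration:

```python
# A rough Bayesian sketch of the octopus point. The prior probability of a
# psychic octopus is an invented, deliberately tiny number; the point is that
# a 1-in-256 run of luck barely moves it.
prior_psychic = 1e-6          # assumed prior: psychic octopuses are very rare
p_data_if_psychic = 1.0       # assume a psychic octopus always gets it right
p_data_if_chance = 0.5 ** 8   # 8 correct guesses by luck: 1/256

posterior = (p_data_if_psychic * prior_psychic) / (
    p_data_if_psychic * prior_psychic
    + p_data_if_chance * (1 - prior_psychic)
)

print(f"P(8 correct | chance)  = {p_data_if_chance:.4f}")   # about 0.0039
print(f"P(psychic | 8 correct) = {posterior:.6f}")          # still tiny
```

The unlikeliness of the data under chance and the probability that chance is the explanation are two different things.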
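
And on the black spot point, here is a quick simulation, using made-up numbers rather than real health data, of why small areas are the ones that tend to look extreme even when the underlying risk is identical everywhere:

```python
# Invented example: every area has the same underlying risk, but observed
# rates in small areas swing much further from it, so the apparent "black
# spots" tend to be the small areas.
import random

random.seed(1)
true_rate = 0.01  # the same underlying risk everywhere (assumption)

def observed_rate(population):
    cases = sum(random.random() < true_rate for _ in range(population))
    return cases / population

small_areas = [observed_rate(1_000) for _ in range(50)]
large_areas = [observed_rate(100_000) for _ in range(50)]

print("worst small area:", max(small_areas))   # typically well above 1%
print("worst large area:", max(large_areas))   # typically close to 1%
```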

People aren’t always very good at understanding percentages, either, and in particular the difference between percentages and percentage points. And people are really bad at understanding probabilities and risks:

The trouble is, many of us struggle with understanding risk. I realised how tenuous my grasp of risk was when I noticed that 1 in 20 sounded a bigger risk to me than 5 percent (yes, they’re exactly the same). Representing risk so that people can get a true understanding of it is an art as well as a science.

Which is why giving children lessons in gambling may not be a stupid idea.
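
Going back to percentage points and the 1 in 20 versus 5 percent point, the arithmetic itself is tiny; a three-line check, with illustrative numbers only:

```python
# Percentage points vs percentages, with made-up numbers: a rate moving from
# 10% to 12% is a rise of 2 percentage points but a 20% relative increase,
# and "1 in 20" is exactly the same risk as 5%.
old_rate, new_rate = 0.10, 0.12

print(f"{(new_rate - old_rate) * 100:.0f} percentage points")       # 2
print(f"{(new_rate - old_rate) / old_rate:.0%} relative increase")  # 20%
print(f"1 in 20 = {1 / 20:.0%}")                                    # 5%
```

Both descriptions refer to the same change; a headline can pick whichever sounds bigger.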

There are many people out there doing their best to introduce some sanity into the world. The Understanding Uncertainty website is consistently interesting and well written (have I mentioned that before?), Ben Goldacre has lots of useful stuff, the Guardian’s datablog is just starting a series on statistics (the first article explains samples and how bias can skew results), and Straight Statistics is also well worth a look.


How can you tell?

It’s well known that people are very keen to find causality in the world, and reluctant to accept that a lot of what goes on is just random. Those of us who’ve been educated properly know that correlation is not causation, but it’s sometimes difficult to put that into practice.

There are some common examples: people who have lucky mascots, or rituals they go through before special events such as exams or sports matches they are taking part in (or watching, or that teams they support are playing in). To my mind, many business books that use the argument “XYZ Corp did really well under Pat as CEO, Pat has these character traits and behaviour patterns, so if you can develop the same traits and patterns your business will do really well too” are along the same lines. You need a very strong argument that causality is actually present before believing it.

It’s the argument against active fund management, too: if you take a large number of managers, one of them will come out top over a year, or indeed over any period you like to mention, even if their performance owes nothing to skill. So saying that so-and-so has had consistently good results isn’t a very strong argument that they are actually better at it than anyone else, as opposed to luckier than most other people.
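
To see how easily luck manufactures a star manager, here is a small simulation; the numbers of managers and years are made up, and every manager’s yearly result is a pure coin flip:

```python
# Skill-free managers: each has a 50% chance of beating the benchmark each
# year. Across 1,000 of them, someone will quite often beat it every single
# year by luck alone.
import random

random.seed(42)
n_managers, n_years = 1000, 10

def lucky_streak():
    # True if a skill-free manager happens to beat the benchmark every year.
    return all(random.random() < 0.5 for _ in range(n_years))

winners = sum(lucky_streak() for _ in range(n_managers))
print(f"{winners} of {n_managers} skill-free managers beat the benchmark "
      f"in all {n_years} years (expected about {n_managers * 0.5 ** n_years:.1f})")
```

With these numbers you expect roughly one such streak, so the existence of a manager with a consistently good record tells you very little by itself.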

However, even if some fund managers are genuinely skilled, it’s possible for unskilled ones to mimic their performance. Tim Harford explains it well:

… it is possible for an unskilled fund manager to mimic a genuinely skilled one, in the same way that an insect might mimic a leaf, or a harmless creature mimic a poisonous one.

This mimicry, too, involves three steps: first, invest all your funds in whatever benchmark you need to beat, whether it’s treasury bills or a stock market index; second, make a bet that some unlikely event will not come to pass using the invested funds as security; finally, boast of benchmark beating returns, because you’ve delivered the benchmark plus the additional money from winning the bet. Collect your performance fee. (In the unlikely event that you lost the bet and with it all your investors’ cash, simply cough awkwardly and look at your shoes.)

It turns out that it’s impossible to tell the difference between this and a more conventional strategy just by looking at the investment returns.

These are the “black swans” made famous by Nassim Taleb: low probability, high-impact events, except that these particular swans are genetically engineered – deliberately manufactured and then hidden away, to escape at unwelcome moments.

Harford goes on to explain how these mimicking strategies can be used to game nearly all bonus schemes based solely on performance.
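
To make the mechanics concrete, here is a toy version of the sort of strategy Harford describes; all the numbers (benchmark return, premium, blow-up probability) are invented:

```python
# Toy mimicry strategy: hold the benchmark and, in effect, sell insurance
# against a rare event. Most years the fund reports "benchmark plus a bit";
# in a bad year the bet is lost and the investors' money goes with it.
import random

random.seed(7)
benchmark_return = 0.05   # assumed benchmark return each year
premium = 0.02            # assumed premium for taking on the rare-event bet
disaster_prob = 0.05      # assumed chance per year that the rare event happens

def yearly_return():
    if random.random() < disaster_prob:
        return -1.0                       # the rare event: everything is lost
    return benchmark_return + premium     # otherwise: beat the benchmark

for year in range(1, 11):
    print(f"year {year:2d}: {yearly_return():+.1%}")
```

Until a blow-up year arrives, the reported returns look exactly like steady benchmark-beating performance, which is why they can’t be distinguished from skill just by looking at them.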

This suggests an obvious question: is this in fact a surer way of getting good investment performance than relying on skill anyway?



Confidence and causality

OK, it’s a bit trite, but human behaviour is really important, and understanding it is a goal for people in many different fields. Marketing, education and social policy all seek to influence our behaviour in different ways and for different purposes — that’s surely what the whole Nudge thing is all about, for a start. Economists have traditionally taken a pretty simplistic view: homo economicus seems to have a very narrow view of the utility function he (and it is often he) is trying to maximise.

Psychologists have known for some time that real life just isn’t that simple. Daniel Kahneman and Amos Tversky first published some of their work on how people make “irrational” economic choices in the early 1970s, and since then the idea of irrationality has been widely accepted. It’s now well known that we have many behavioural biases: the trouble is, what do we do with the knowledge? It’s difficult to incorporate it into economic or financial models (or indeed other behavioural models): it’s often possible to model one or two biases, but not the whole raft. Which means that models that rely, directly or indirectly, on assumptions about people’s behaviour can be spectacularly unreliable.

Kahneman, who won the 2002 Nobel Memorial Prize in Economics (Tversky died in 1996), has written in a recent article about the dangers of overconfidence (it’s well worth a read). One thing that comes out of it for me is how much people want to be able to ascribe causality: saying that variations are just random, rather than the result of people’s skill at picking investments or of some environmental or social effect on bowel cancer, is not a common reaction, and indeed is often resisted.

It’s something we should think about when judging how much reliance to place on the results of our models. When I build a model, I naturally think I’ve done a good job, and I’m confident that it’s useful. And if, in due course, it turns out to make reasonable predictions, I’m positive that it’s because of my skill in building it. But, just by chance, my model is likely to be right some of the time anyway. It may never be right again.