Categories
Environment

Energy and economics

A somewhat frequent criticism of common economic theories and frameworks is that they are isolated from real-world concerns such as energy and resource constraints: the concept of limited resources, and ideas like the second law of thermodynamics, simply don’t seem to affect the economics at all. You come across this criticism primarily at what might be called the greener fringe, which means that it is rather pooh-poohed by some.

Economic theory is, of course, just a model of the real world. It’s bound to simplify and abstract some aspects, and concentrate on others. And many economists don’t consider resource and environment issues to be of primary concern. But what might economics look like if it did take energy (and other resources) as fundamental to the model, rather than as extras which it would be nice to take into account? Gail Tverberg has an interesting piece in which she explains how she sees energy use as a primary driver of economic growth. It’s a good read, and makes a lot of sense. I’m looking forward to her follow-up piece, which she says will talk about how debt fits into the picture.

Categories
Interesting

Fourier and McCartney

Here’s a long but fascinating post on deciphering the opening chord in A Hard Day’s Night. Along the way it gives a good explanation of Fourier transforms for the non-mathematician.

It also gives a really good example of why it’s important to look at the overall reasonableness of a result, rather than blindly relying on the maths. Summary of this bit: somebody ran a Fourier analysis on the chord, assumed that the loudest frequencies were the fundamentals, and came up with the following notes:

  • Harrison (electric 12 string guitar): A2 A3, D3 D4, G3 G4, C4 C4
  • McCartney (electric bass guitar): D3
  • Lennon (acoustic 6 string guitar): C5
  • Martin (piano): D3 F3 D5 G5 E6

But why would Lennon play only one note? It was meant to be a dramatic opening: give it all you’ve got. Single notes just don’t cut it.

This observation led to a realisation that the assumption that the loudest frequencies were the fundamentals was flawed. Record producers often fiddle with the frequencies, and in this case it appears that the bass frequencies were turned down.
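
For concreteness, here’s a minimal sketch (in Python, with an invented signal, not the post’s actual analysis) of the peak-picking approach: take a Fourier transform and treat the loudest peaks as the notes being played. If the mix has boosted the overtones or turned down the fundamentals, the loudest peaks need not be fundamentals at all.

```python
# Minimal sketch: synthesise a note whose overtones are louder than its
# fundamental, then apply the naive "loudest peaks are the notes" rule.
import numpy as np

SAMPLE_RATE = 44100
t = np.linspace(0.0, 1.0, SAMPLE_RATE, endpoint=False)

# Hypothetical note: fundamental at 196 Hz (G3), with the 2nd and 3rd
# harmonics boosted, e.g. by EQ at the mixing desk.
signal = (0.3 * np.sin(2 * np.pi * 196 * t)     # fundamental (quiet)
          + 1.0 * np.sin(2 * np.pi * 392 * t)   # 2nd harmonic (loud)
          + 0.8 * np.sin(2 * np.pi * 588 * t))  # 3rd harmonic (loud)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)

# Naive rule: the two loudest peaks must be the notes being played.
loudest = np.sort(freqs[np.argsort(spectrum)[-2:]])
print(loudest)  # [392. 588.] -- both overtones; the 196 Hz fundamental is missed entirely
```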

There’s lots of other interesting stuff in the post about how the author went about his detective work, and what really hits home to me is the variety of techniques he used. The moral of the story is to use all the information at your disposal, not just the maths.

Categories
Modelling

Implausible assumptions

Antonio Fatas has some interesting things to say about the reliance of economic models on implausible assumptions.

All models rely on assumptions and economic models are known (and made fun of) for relying on very strong assumptions about rationality, perfect information,… Many of these assumptions are unrealistic but they are justified as a way to set a benchmark model around which one is then allowed to model deviations from the assumptions.

The problem is, he says, that an unrealistically high standard of proof is set for departing from the benchmark. Essentially, it’s OK to use the benchmark model, which we all know is just plain wrong, but you need really strong evidence in order to support the use of other assumptions, even though they just might be right.

I suspect the problem is that it’s easy to differentiate the benchmark assumptions: they’re the ones that we wish were true, in a way, because they’ve got nice properties. All the other assumptions, the ones that might actually be true, are messy, and why choose one set over another?


Categories
Interesting, Software

Reinhart and Rogoff: was Excel the problem?

There’s a bit of a furore going on at the moment: it turns out that a controversial paper in the debate about the after-effects of the financial crisis had some peculiarities in its data analysis.

Rortybomb has a great description, and the FT’s Alphaville and Tyler Cowen have interesting comments.

In summary, back in 2010 Carmen Reinhart and Kenneth Rogoff published a paper, Growth in a Time of Debt, in which they claimed that “median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower.” Reinhart and Rogoff didn’t release the data they used for their analysis. Since then, apparently, people have tried and failed to reproduce the analysis that gave this result.

Now, a paper has been released that does reproduce the result: Herndon, Ash and Pollin’s Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff.

Except that it doesn’t, really. Herndon, Ash and Pollin identify three issues with Reinhart and Rogoff’s analysis, which mean that the result is not quite what it seems at first glance. It’s all to do with the weighted average that R&R use for the growth rates.

First, there are data sets for 20 countries covering the period 1946-2009. R&R exclude the first few years of data for three countries. It turns out that those three countries had high debt levels and solid growth in the omitted periods. R&R didn’t explain these exclusions.

Second, the weights for the averaging aren’t straightforward (or, possibly, they are too straightforward). Rortybomb has a good explanation:

Reinhart-Rogoff divides country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets. So the growth rate of the 19 years that the U.K. is above 90 percent debt-to-GDP are averaged into one number. These country numbers are then averaged, equally by country, to calculate the average real GDP growth rate.

In case that didn’t make sense, let’s look at an example. The U.K. has 19 years (1946-1964) above 90 percent debt-to-GDP with an average 2.4 percent growth rate. New Zealand has one year in their sample above 90 percent debt-to-GDP with a growth rate of -7.6. These two numbers, 2.4 and -7.6 percent, are given equal weight in the final calculation, as they average the countries equally. Even though there are 19 times as many data points for the U.K.
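
A toy calculation makes the difference concrete. The yearly figures below are invented placeholders; only the two country averages (2.4 percent over 19 U.K. years, -7.6 percent for New Zealand’s single year) come from the example above.

```python
# Equal weight per country vs equal weight per country-year, using only
# the two countries in the example. Individual yearly growth rates are
# made up; only the country averages match the quoted figures.
uk_years = [2.4] * 19   # 19 UK country-years averaging 2.4% growth
nz_years = [-7.6]       # 1 NZ country-year at -7.6% growth

# R&R-style: average the per-country averages, one vote per country.
country_means = [sum(uk_years) / len(uk_years), sum(nz_years) / len(nz_years)]
equal_country = sum(country_means) / len(country_means)

# Alternative: pool all 20 country-year observations, one vote per year.
pooled = uk_years + nz_years
equal_year = sum(pooled) / len(pooled)

print(f"equal weight per country:      {equal_country:5.2f}%")  # -2.60%
print(f"equal weight per country-year: {equal_year:5.2f}%")     #  1.90%
```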

Third, there was an Excel error in the averaging. A formula omits five rows. Again, Rortybomb has a good picture:

[Image from Rortybomb: the Reinhart-Rogoff spreadsheet coding error]

Oops!

So, in summary, the weighted average omits some years, some countries, and isn’t weighted in the expected way. It doesn’t seem to me that any one of these is the odd man out, and I don’t think it really matters why either of the omissions occurred: in other words, I don’t think this is a major story about an Excel error.

I do think, though, that it’s an excellent example of something I’ve been worried about for some time: should you believe claims in published papers, when the claims are based on data analysis or modelling?

Let’s consider another, hypothetical, example. Someone’s modelled, say, the effects of differing capital levels on bank solvency in a financial crisis. There’s a beautifully argued paper, full of elaborate equations specifying interactions between this, that and the other. Everyone agrees that the equations are the bee’s knees, and appear to make sense. The paper presents results from running a model based on the equations. How do you know whether the model does actually implement all the spiffy equations correctly? By the way, I don’t think it makes any difference whether or not the papers are peer reviewed. It’s not my experience that peer reviewers check the code.

In most cases, you just can’t tell, and have to take the results on trust. This worries me. Excel errors are notorious. And there’s no reason to think that other models are error-free, either. I’m always finding bugs in people’s programs.

Transparency is really the only solution. Data should be made available, as should the source code of any models used. It’s not the full answer, of course, as there’s then the question of whether anyone has bothered to check the transparently provided information. And, if they have, what they can do to disseminate the results. Obviously for an influential paper like the R&R paper, any confirmation that the results are reproducible or otherwise is likely to be published itself, and enough people will be interested that the outcome will become widely known. But there’s no generally applicable way of doing it.

Categories
Actuarial, Environment

Modelling isn’t just about money

Last autumn I was at an actuarial event, listening to a presentation on the risks involved in a major civil engineering project and how to price possible insurance covers. It must have been a GI (general insurance) event, obviously. That’s exactly the sort of thing GI actuaries do.

The next presentation discussed how to model how much buffer is needed to bring the probability of going into deficit at any point in a set period below a specified limit. It sounded exactly like modelling capital requirements for an insurer.

But then the third presentation was on how to model the funding requirements for an entity independent of its sponsor, funded over forty to sixty years, paying out over the following twenty to thirty, with huge uncertainty about exactly when the payments will occur and how much they will actually be. It must be pensions, surely! A slightly odd actuarial event, to combine pensions and GI…

The final presentation made it seem even odder, if not positively unconventional: the role of sociology, ecology and systems thinking in modelling is not a mainstream actuarial topic by any means.

And it wasn’t a mainstream actuarial event. It had been put on by the profession’s Resource and Environment member interest group, and the topics of the presentations were actually carbon capture, modelling electricity supply and demand, funding the decommissioning of nuclear power stations, and insights from the Enterprise Risk Management member interest group’s work – all fascinating examples of how actuarial insight is being applied in new areas. And to me, fascinating examples of how the essence of modelling doesn’t depend nearly as much as you might think on what is actually being modelled.
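
To illustrate how readily the machinery carries across subjects, here’s a minimal Monte Carlo sketch of the kind of buffer calculation described in the second presentation. The cashflow distribution and all the numbers are invented for illustration; a real model would be far richer.

```python
# Minimal sketch: smallest starting buffer that keeps the probability of
# ever going into deficit over the period below a chosen limit.
# Cashflow assumptions are entirely made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
N_SIMS, N_YEARS = 100_000, 20
TARGET_RUIN_PROB = 0.005  # e.g. 0.5% chance of ever being in deficit

# Simulated net cashflows (income minus outgo) per year, in arbitrary units.
cashflows = rng.normal(loc=1.0, scale=5.0, size=(N_SIMS, N_YEARS))

# Worst cumulative position over the period in each simulation.
worst_position = np.cumsum(cashflows, axis=1).min(axis=1)

# We need a buffer B with P(B + worst_position < 0) <= target, i.e. B at
# the (1 - target) quantile of the shortfall -worst_position.
buffer_needed = np.quantile(-worst_position, 1 - TARGET_RUIN_PROB)
print(f"buffer needed for {TARGET_RUIN_PROB:.1%} ruin probability: {buffer_needed:.1f}")
```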

Categories
Actuarial

Models and modellers

On 1 March I gave the Worshipful Company of Actuaries lecture at Heriot-Watt University. Here’s the abstract:

Being an actuary nowadays is all about modelling, and in this lecture I’ll discuss how we should go about it. We all know that all models are wrong but some are useful – what does this mean in practice? And what have sheep and elephants got to do with it? Along the way I’ll also consider some of the ways in which the actuarial profession is changing now and is likely to change in the future, and what you should do about it.

And here’s what I said.

Categories
risk management, Uncertainty

Confidence and causality

Ok, it’s a bit trite, but human behaviour is really important, and a good understanding of human behaviour is a goal for people in many different fields. Marketing, education and social policy all seek to influence our behaviour in different ways and for different purposes — that’s surely what the whole Nudge thing is all about, for a start. Economists have traditionally taken a pretty simplistic view: homo economicus seems to have a very narrow view of the utility function he (and it is often he) is trying to maximise.

Psychologists have known for some time that real life just isn’t that simple. Daniel Kahneman and Amos Tversky first published some of their work on how people make “irrational” economic choices in the early 1970s, and since then the idea of irrationality has been widely accepted. It’s now well known that we have many behavioural biases: the trouble is, what do we do with the knowledge? It’s difficult to incorporate it into economic or financial models (or indeed other behavioural models): it’s often possible to model one or two biases, but not the whole raft. Which means that models that rely, directly or indirectly, on assumptions about people’s behaviour can be spectacularly unreliable.

Kahneman, who won the 2002 Nobel Memorial Prize in Economics (Tversky died in 1996), has written in a recent article about the dangers of overconfidence (it’s well worth a read). One thing that comes out of it for me is how much people want to be able to ascribe causality: saying that variations are just random variations, rather than being due to people’s skill at picking investments, or some environmental or social effect on bowel cancer, is not a common reaction, and indeed is often resisted.

It’s something we should think about when judging how much reliance to place on the results of our models. When I build a model, I naturally think I’ve done a good job, and I’m confident that it’s useful. And if, in due course, it turns out to make reasonable predictions, I’m positive that it’s because of my skill in building it. But, just by chance, my model is likely to be right some of the time anyway. It may never be right again.
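
A back-of-the-envelope illustration of the “right by chance” point, with made-up numbers: a model with no predictive skill at all, one that simply tosses a coin to call the direction of each year’s outcome, still looks impressive surprisingly often.

```python
# A no-skill "model" that calls the direction of each year's outcome at
# random is right five years in a row with probability 1/2**5.
p_streak = 0.5 ** 5
print(f"P(no-skill model right 5 years running) = {p_streak:.3f}")  # 0.031

# So among 100 independent no-skill models, a few will look skilled for a while.
print(f"expected lucky models out of 100 = {100 * p_streak:.1f}")   # 3.1
```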

Categories
Data, risk management

Fiddling the figures: Benford reveals all

Well, some of it, anyway. There’s been quite a lot of coverage on the web recently about Benford’s law and the Greek debt crisis.

As I’m sure you remember, Benford’s law says that in lists of numbers from many real life sources of data, the leading digit isn’t uniformly distributed. In fact, around 30% of leading digits are 1, while fewer than 5% are 9. The phenomenon has been known for some time, and is often used to detect possible fraud – if people are cooking the books, they don’t usually get the distributions right.
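
Those percentages come straight from Benford’s formula, which gives the probability of leading digit d as log10(1 + 1/d); a couple of lines of Python confirm them.

```python
# Expected distribution of leading digits under Benford's law.
import math

for d in range(1, 10):
    print(f"leading digit {d}: {math.log10(1 + 1 / d):.1%}")
# leading digit 1: 30.1%  ...  leading digit 9: 4.6%
```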

It’s been in the news because it turns out that the macroeconomic data reported by Greece shows the greatest deviation from Benford’s law among all euro states (hat tip Marginal Revolution).

There was also a possible result that the numbers in published accounts in the financial industry deviate more from Benford’s law now than they used to. But it now appears that the analysis may be faulty.

How else can Benford’s law be used? What about testing the results of stochastic modelling, for example? If the phenomena we are trying to model are ones for which Benford’s law works, then the results of the model should comply too.
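
Here’s a rough sketch of how such a test might look: simulate output from a model (here just a made-up lognormal distribution standing in for real model output), extract the leading digits, and run a chi-square goodness-of-fit test against the Benford distribution. The stand-in model, its parameters and the choice of test are all illustrative assumptions.

```python
# Sketch: compare the leading digits of simulated model output with Benford.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(1)
simulated = rng.lognormal(mean=10.0, sigma=2.0, size=50_000)  # stand-in model output

# Leading (first significant) digit of each simulated value.
leading = np.array([int(f"{x:.6e}"[0]) for x in simulated])
observed = np.array([(leading == d).sum() for d in range(1, 10)])

# Expected counts under Benford's law.
benford_probs = np.log10(1 + 1 / np.arange(1, 10))
expected = benford_probs * len(leading)

stat, p_value = chisquare(observed, f_exp=expected)
print(f"chi-square = {stat:.1f}, p-value = {p_value:.3g}")
```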