The Bankers’ New Clothes

Way back in July I wrote a review of The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It by Anat Admati & Martin Hellwig for The Actuary magazine. The review has finally been published, but it’s hard to find on the website and doesn’t seem to have a permalink yet. So here it is.

It’s fairly obvious from the title of this book that it’s not going to be full of praise for bankers. I don’t think it’s giving anything away to say that that’s an understatement: a theme running throughout the book is that bankers mislead the rest of us about what’s involved in banking and about basic financial concepts. But the main message is even simpler than that – if banks were more highly capitalised, they would be less likely to fail, and, importantly, there’s no good reason why they shouldn’t be. It’s not a subtle message on the face of it, and you might wonder whether it’s enough for a whole book. It is, for two reasons.

First, a surprisingly large part of the book is taken up with a very clear explanation of what capital is and how it works. There’s a simple running example – Kate, who borrows money in order to buy a house – which is gradually elaborated to illustrate a range of different concepts, including leverage, guarantees, and return on equity. This performs the useful function of bringing us back to basics: capital (or equity) is the excess of assets over liabilities; it really is as simple as that. Well, only slightly more complicated – it’s what the excess of assets over liabilities would be if the accounts were realistic. The book does a reasonable job of pointing out that different accounting treatments can radically affect the answer.
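The leverage point is easy to see with a few made-up numbers. Here’s a minimal sketch, using figures of my own rather than the book’s, of how borrowing magnifies both gains and losses on the owner’s stake:

```python
# Illustrative numbers only (mine, not the book's): a £300k house financed
# with different amounts of the buyer's own money ("equity"), the rest borrowed.
def return_on_equity(house_price, equity, price_change):
    debt = house_price - equity
    new_value = house_price * (1 + price_change)
    new_equity = new_value - debt            # the lender is still owed the same amount
    return (new_equity - equity) / equity

for equity in (30_000, 90_000):
    up = return_on_equity(300_000, equity, 0.05)
    down = return_on_equity(300_000, equity, -0.05)
    print(f"equity £{equity:,}: +5% house price -> {up:+.0%} on equity, "
          f"-5% -> {down:+.0%}")
```

With only £30k of equity, a 5% fall in the house price wipes out half the owner’s stake; with £90k it costs a sixth. That, in essence, is the buffer argument the book makes for bank capital.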

Second, there’s a lot of emphasis on debunking the argument that banks are different, and that the usual rules don’t apply to them. This is where the book’s title comes from. It’s this second aspect that is, in my view, the really important part of the book; it’s also, unfortunately, weaker than it could have been.

The authors take the overall line that banks aren’t really different from any other company. High levels of leverage may produce high rates of return, but they also mean higher risk. More capital means a bigger buffer. It’s really not rocket science. The problem is that the villains of this piece, the bankers, dress it up as if it is rocket science, and obfuscate what’s going on as they do so. For example, the authors point out that bankers talk about holding capital: “… Apple and Wal-Mart are not said to ‘hold’ their equity. This is not a silly quibble about words. The language confusion creates mental confusion about what capital does and does not do.”

This is, on the whole, an effective line of argument, and has strong resonances. It would be even more effective if the authors took some of the bankers’ positions a bit more seriously, and went into more detail about why they think that way rather than simply pointing out how misleading some of their statements are. After all, at least some bankers are both smart and honest, so presumably there’s some intellectual backing to their reasoning. It would have been good for that to be exposed and subjected to rigorous analysis. But the book’s emphasis on the basic similarity of banks to other enterprises is important. So often, in many different contexts, we hear the cry from vested interests “but you don’t understand, we’re different because…” Nearly always this precedes a plea for special treatment – not only are they different, but different in a way that makes life harder. We should distrust this line of argument. Exceptional treatment requires exceptional justification. Exceptions introduce complexity, which results in unintended consequences. I’m not saying that exceptional treatment is never the right thing, but if the whole edifice becomes so intricate that it’s incredibly difficult to explain to a slightly sceptical intelligent lay person, there’s a definite suspicion that it’s built on sand.

There’s another way of looking at this. Banks are indeed a special case: they are a major source of systemic risk in the global financial system. They should therefore be at least as safe as non-banks, and should be capitalised at least as highly. This isn’t an argument that’s often heard from within the banking community, or indeed elsewhere, which is why this book is important. Even with the recent proposals for tougher capital requirements on banks, we aren’t seeing proposals for capital levels of 20%–30% of total assets, which is what the book recommends. Now that’s something to think about.

I don’t think I’d write the review very differently if I was writing it now, though I might refer to some of the evidence that backs the view that raw leverage is important, and risk-weighted capital is not. For instance, just look at charts 3 and 5 in this speech by Andrew Haldane of the Bank of England.


It’s the economics, stupid

We often hear that Big Pharma doesn’t develop drugs that treat diseases generally found in the less well off parts of the world, because there’s no money in it. We also hear that antibiotic-resistant superbugs are on the rampage, and that there’s no hope of outwitting them.

OK, what we actually hear is that bacteria are developing resistance to all the known antibiotics. What we don’t usually hear is that the reason that new antibiotics aren’t being discovered is that there’s no money in developing new ones (or not as much money as in other drugs, anyway).

For many years, the problem of resistance remained hypothetical because drug companies were consistently introducing new types of antibiotic. But antibiotics are a one-and-done treatment, and are far less profitable for Big Pharma than drugs taken daily to treat chronic conditions and, preferably, have no competition to keep costs down. … We’re in an evolutionary war with bacteria, but one of our best defenses pulled out due to economic incentives …

There’s more in this fascinating article. HT Izabella Kaminska.

The big guys don’t always know what they’re doing

You’d think that a really big software company, like Adobe, would know what it’s doing. But no. You may have noticed that there was a big data breach: millions of usernames and (encrypted) passwords were stolen. But they were encrypted, so no big deal, right?

Ah. Well. That’s the point. As this article explains, it was indeed the encrypted passwords that were stolen, not the hashes (if this is gobbledygook to you, the article has a very clear explanation of what this means). As the password hints were stolen too, it turns out to be really easy to decrypt many of them.

Now, I am by no means a security expert. And for websites I build nowadays, I use a ready-rolled solution (usually WordPress). But when I wrote things from scratch, even I knew better than to store the encrypted passwords. I may not have used the most secure hashing algorithm, or proper salting, but I didn’t encrypt the passwords.
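If the distinction between hashing and encrypting is new to you, here’s a minimal sketch of the hashing approach using only Python’s standard library (the choice of PBKDF2 and the iteration count are purely illustrative, not what any particular site uses):

```python
import hashlib, hmac, os

def hash_password(password, salt=None):
    # Store only the salt and the slow, salted hash; there is no key that can decrypt it.
    salt = salt or os.urandom(16)                       # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)       # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))   # True
print(check_password("letmein", salt, digest))                        # False
```

The trouble with encryption is that it is reversible by design: recover the key, or spot patterns across users who chose the same password, and the plaintext comes back.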

(HT Bruce Schneier)

Modern technology

What’s the one piece of technology that no high tech company is without?

Whiteboards. To my mind, a huge improvement over their low tech predecessor, the blackboard, but some disagree. They prefer the tactile feel of chalk; I hate the dust, and even thinking about the scraping noise of the back of a blackboard rubber against the board sets my teeth on edge.

The dominance of whiteboards rings true, though. I don’t think I’ve ever worked anywhere with enough whiteboards in the meeting rooms (or enough meeting rooms, where it’s been open plan offices, come to that).

Fighting link rot

We’ve all come across signs of link rot, the phenomenon by which material we link to on the distributed Web vanishes or changes beyond recognition over time. In all cases it’s annoying to follow a link to a page that no longer exists, but sometimes it really matters.

Jonathan Zittrain writes:

We found that half of the links in all Supreme Court opinions no longer work.  And more than 70% of the links in such journals as the Harvard Law Review (in that case measured from 1999 to 2012), currently don’t work.

That’s bad. But they’re doing something about it. The Harvard Law School Library, together with around 30 other libraries around the world, has started perma.cc, which will provide a facility for permanent archiving and links. It’s a great idea. I think it’s good that it’s curated: permanent links can only be created by authorised users, who are typically scholarly journals.

There are other efforts to fight link rot, too, with different priorities.

Swiss cheese

Why do things go wrong? Sometimes, it’s a whole combination of factors. Felix Salmon has some good examples, and reminded me of one of my favourite metaphors: the Swiss cheese model of accident causation.

In the Swiss Cheese model, an organization’s defenses against failure are modeled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting (in Reason’s words) “a trajectory of accident opportunity”, so that a hazard passes through holes in all of the slices, leading to a failure.
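A toy way to see the arithmetic behind the metaphor: if you treat each slice as an independent chance of letting the hazard through (a big simplification of Reason’s model, with probabilities invented purely for illustration), stacking more slices shrinks the chance of all the holes lining up.

```python
import random

def failure_probability(hole_probs, trials=100_000):
    # Fraction of simulated hazards that find a hole in every layer of defence.
    failures = sum(
        all(random.random() < p for p in hole_probs)
        for _ in range(trials)
    )
    return failures / trials

print(failure_probability([0.1, 0.1, 0.1]))        # roughly 0.001
print(failure_probability([0.1, 0.1, 0.1, 0.1]))   # roughly 0.0001
```

More slices and smaller holes both help; neither gets the probability to zero.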

It’s a lovely vision, those little accidents waiting to happen, wriggling through the slices of cheese. But as Salmon points out:

… it’s important to try to prevent failures by adding extra layers of Swiss cheese, and by assiduously trying to minimize the size of the holes in any given layer. But as IT systems grow in size and complexity, they will fail in increasingly unpredictable and catastrophic ways. No amount of post-mortem analysis, from Congress or the SEC or anybody else, will have any real ability to stop those catastrophic failures from happening. What’s more, it’s futile to expect that we can somehow design these systems to “fail well” and thereby lessen the chances of even worse failures in the future.

Which reminds me of Tony Hoare’s comment on complexity and reliability:

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.


Implausible assumptions

Antonio Fatas has some interesting things to say about the reliance of economic models on implausible assumptions.

All models rely on assumptions and economic models are known (and made fun of) for relying on very strong assumptions about rationality, perfect information,… Many of these assumptions are unrealistic but they are justified as a way to set a benchmark model around which one is then allowed to model deviations from the assumptions.

The problem is, he says, that an unrealistically high standard of proof is set for departing from the benchmark. Essentially, it’s OK to use the benchmark model, which we all know is just plain wrong, but you need really strong evidence in order to support the use of other assumptions, even though they just might be right.

I suspect the problem is that it’s easy to differentiate the benchmark assumptions: they’re the ones that we wish were true, in a way, because they’ve got nice properties. All the other assumptions, that might actually be true, are messy, and why choose one set over another?


The right tool?

The next time you notice something being done in Excel where you work, take a moment to question whether it’s the right tool for the job, or whether you or someone in your organisation is a tool for allowing its use.

No, not my words, but from the FT’s consistently excellent Alphaville blog. The point is, it’s easy to use Excel. But it’s very hard to use Excel well.

There are many people out there who can use Excel to solve a problem. They knock up a spreadsheet with a few clicks of the mouse, some dragging and dropping, a few whizzo functions, some spiffy charts, and it all looks really slick. But what if anything needs to be changed? Sensitivity testing? And how do you know you got it right in the first place? Building spreadsheets is an area in which people are overconfident of their abilities, and tend to think that nothing can go wrong.

Instead of automatically reaching for the mouse, why not Stop Clicking, Start Typing?
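By way of a sketch of what “typing” looks like (the file name and column names here are hypothetical): the whole calculation is written down, so it can be re-run on next month’s data, reviewed, and version-controlled, rather than reconstructed click by click.

```python
import csv

def sales_by_region(path):
    # Every step of the analysis is explicit and repeatable.
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["sales"])
    return totals

print(sales_by_region("sales.csv"))
```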

But we won’t. There’s a huge tendency to avoid learning new things, and everyone thinks they know how to use Excel. The trouble is, they know how to use it for displaying data (mostly), and don’t realise that what they are really doing is computer programming. A friend of mine works with a bunch of biologists, and says:

I spend most of my time doing statistical analyses and developing new statistical methods in R. Then the biologists stomp all over it with Excel, trash their own data, get horribly confused, and complain that “they’re not programmers” so they won’t use R.

But that’s the problem. They are programmers, whether they like it or not.

Personally, I don’t think that things will change. We’ll keep on using Excel, there will keep on being major errors in what we do, and we’ll continue to throw up our hands in horror. But it’s rather depressing — it’s so damned inefficient, if nothing else.


Reinhart and Rogoff: was Excel the problem?

There’s a bit of a furore going on at the moment: it turns out that a controversial paper in the debate about the after-effects of the financial crisis had some peculiarities in its data analysis.

Rortybomb has a great description, and the FT’s Alphaville and Tyler Cowen have interesting comments.

In summary, back in 2010 Carmen Reinhart and Kenneth Rogoff published a paper, Growth in a Time of Debt, in which they claim that “median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower.” Reinhart and Rogoff didn’t release the data they used for their analysis. Since then, apparently, people have tried and failed to reproduce the analysis that gave this result.

Now, a paper has been released that does reproduce the result: Herndon, Ash and Pollin’s Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff.

Except that it doesn’t, really. Herndon, Ash and Pollin identify three issues with Reinhart and Rogoff’s analysis, which mean that the result is not quite what it seems at first glance. It’s all to do with the weighted average that R&R use for the growth rates.

First, there are data sets for 20 countries covering the period 1946-2009. R&R exclude data for three countries for the first few years. It turns out that those three countries had high debt levels and solid growth in the omitted periods. R&R didn’t explain these exclusions.

Second, the weights for the averaging aren’t straightforward (or, possibly, they are too straightforward). Rortybomb has a good explanation:

Reinhart-Rogoff divides country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets. So the growth rate of the 19 years that the U.K. is above 90 percent debt-to-GDP are averaged into one number. These country numbers are then averaged, equally by country, to calculate the average real GDP growth weight.

In case that didn’t make sense, let’s look at an example. The U.K. has 19 years (1946-1964) above 90 percent debt-to-GDP with an average 2.4 percent growth rate. New Zealand has one year in their sample above 90 percent debt-to-GDP with a growth rate of -7.6. These two numbers, 2.4 and -7.6 percent, are given equal weight in the final calculation, as they average the countries equally. Even though there are 19 times as many data points for the U.K.
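Working through just those two countries makes the effect of the weighting choice obvious (this is a sketch of the figures quoted above only; the real 90-percent bucket contains other countries too):

```python
uk_years, uk_growth = 19, 2.4      # 19 UK country-years averaging 2.4% growth
nz_years, nz_growth = 1, -7.6      # a single New Zealand year at -7.6%

# Reinhart-Rogoff style: each country counts once, however many years it contributes
equal_country = (uk_growth + nz_growth) / 2

# Alternative: weight each country by its number of country-years
by_country_year = (uk_years * uk_growth + nz_years * nz_growth) / (uk_years + nz_years)

print(equal_country)              # -2.6
print(round(by_country_year, 1))  # 1.9
```

A swing of more than four percentage points, from the weighting convention alone.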

Third, there was an Excel error in the averaging. A formula omits five rows. Again, Rortybomb has a good picture:

[Image: Rortybomb’s screenshot of the Reinhart–Rogoff spreadsheet, showing the formula that omits five rows]

Oops!

So, in summary, the weighted average omits some years, some countries, and isn’t weighted in the expected way. It doesn’t seem to me that any one of these is the odd man out, and I don’t think it really matters why either of the omissions occurred: in other words, I don’t think this is a major story about an Excel error.

I do think, though, that it’s an excellent example of something I’ve been worried about for some time: should you believe claims in published papers, when the claims are based on data analysis or modelling?

Let’s consider another, hypothetical, example. Someone’s modelled, say, the effects of differing capital levels on bank solvency in a financial crisis. There’s a beautifully argued paper, full of elaborate equations specifying interactions between this, that and the other. Everyone agrees that the equations are the bee’s knees, and appear to make sense. The paper presents results from running a model based on the equations. How do you know whether the model does actually implement all the spiffy equations correctly? By the way, I don’t think it makes any difference whether or not the papers are peer reviewed. It’s not my experience that peer reviewers check the code.

In most cases, you just can’t tell, and have to take the results on trust. This worries me. Excel errors are notorious. And there’s no reason to think that other models are error-free, either. I’m always finding bugs in people’s programs.

Transparency is really the only solution. Data should be made available, as should the source code of any models used. It’s not the full answer, of course, as there’s then the question of whether anyone has bothered to check the transparently provided information. And, if they have, what they can do to disseminate the results. Obviously for an influential paper like the R&R paper, any confirmation that the results are reproducible or otherwise is likely to be published itself, and enough people will be interested that the outcome will become widely known. But there’s no generally applicable way of doing it.

Modelling isn’t just about money

Last autumn I was at an actuarial event, listening to a presentation on the risks involved in a major civil engineering project and how to price possible insurance covers. It must have been a GI (general insurance) event, obviously. That’s exactly the sort of thing GI actuaries do.

The next presentation discussed how to model how much buffer is needed to bring the probability of going into deficit at any point in a set period below a specified limit. It sounded exactly like modelling capital requirements for an insurer.
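For the curious, that kind of buffer calculation can be sketched as a simple Monte Carlo search. Everything here is a placeholder of mine – the cash-flow distribution, the horizon and the 1% limit – not what was actually presented:

```python
import random

def deficit_probability(buffer, years=10, sims=20_000):
    # Probability that the running balance, starting at `buffer`, ever dips below zero.
    hits = 0
    for _ in range(sims):
        balance = buffer
        for _ in range(years):
            balance += random.gauss(0, 10)   # placeholder annual net cash flow
            if balance < 0:
                hits += 1
                break
    return hits / sims

# Smallest buffer (searched in steps of 5) that keeps the deficit probability below 1%
buffer = 0
while deficit_probability(buffer) > 0.01:
    buffer += 5
print(buffer)
```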

But then the third presentation was on how to model the funding requirements for an entity independent of its sponsor, funded over forty to sixty years, paying out over the following twenty to thirty, with huge uncertainty about exactly when the payments will occur and how much they will actually be. It must be pensions, surely! A slightly odd actuarial event, to combine pensions and GI…

The final presentation made it seem even odder, if not positively unconventional: the role of sociology, ecology and systems thinking in modelling is not a mainstream actuarial topic by any means.

And it wasn’t a mainstream actuarial event. It had been put on by the profession’s Resource and Environment member interest group, and the topics of the presentations were actually carbon capture, modelling electricity supply and demand, funding the decommissioning of nuclear power stations, and insights from the Enterprise Risk Management member interest group’s work – all fascinating examples of how actuarial insight is being applied in new areas. And to me, fascinating examples of how the essence of modelling doesn’t depend nearly as much as you might think on what is actually being modelled.