Modelling risk management

The limits of models

John Kay has an excellent piece in the FT about the limits of risk models. As he points out, they are no use for risks that they don’t model. In particular, if they are modelling financial risks, and are calibrated using recent data, they are no use at all for modelling major phase changes.

For example, a risk model for the Swiss franc that was calibrated using daily volatility data from the last few years was pretty useless when the franc was un-pegged from the euro. Such a model made a basic, probably implicit assumption that the peg was in place. It was therefore not modelling all the risks associated with the franc.
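To make this concrete, here’s a toy sketch in Python (the daily return figures are invented for illustration, not real EUR/CHF data) of why a volatility model calibrated on pegged-period data is blind to the peg breaking:

```python
import statistics

# Hypothetical daily EUR/CHF returns during the peg: tiny fluctuations
# around zero, because the peg held the rate near 1.20.
pegged_returns = [0.0002, -0.0001, 0.0003, -0.0002, 0.0001,
                  -0.0003, 0.0002, 0.0000, -0.0001, 0.0002]

sigma = statistics.stdev(pegged_returns)

# A naive 99% one-day value-at-risk from a normal fit to this data.
var_99 = 2.33 * sigma

# The day the peg broke, the franc moved by roughly 30% intraday.
peg_break_move = 0.30

print(f"daily sigma: {sigma:.6f}")
print(f"99% VaR:     {var_99:.6f}")
print(f"peg break as multiple of sigma: {peg_break_move / sigma:,.0f}")
```

Any plausible calibration from the peg period puts the actual un-pegging move at hundreds or thousands of standard deviations: the model isn’t estimating that risk badly, it simply isn’t modelling it.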

As Kay puts it,

The Swiss franc was pegged to the euro from 2011 to January 2015. Shorting the Swiss currency during that period was the epitome of what I call a “tailgating strategy”, from my experience of driving on European motorways. Tailgating strategies return regular small profits with a low probability of substantial loss. While no one can predict when a tailgating motorist will crash, any perceptive observer knows that such a crash is one day likely.



Spreadsheet risk strikes again

I just can’t resist this one. Vista Equity Partners is paying around $100 million less than expected for Tibco Software Inc because Goldman Sachs got the number of shares wrong in the spreadsheet that did all the calculations. OK, $100 million isn’t much in the context of a $4 billion deal, but it’s an awful lot of money in any other context; it’s also only just over twice Goldman’s fees to Tibco for the transaction. It’s not clear how the mistake arose.
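The mechanics are mundane. A hypothetical sketch of how a small error in a share count scales up (these numbers are invented, not the actual deal figures):

```python
# Illustrative only: made-up numbers, not the actual Goldman/Tibco figures.
price_per_share = 24.00          # offer price, in dollars
true_share_count = 165_000_000   # shares actually outstanding
wrong_share_count = 169_000_000  # count mistakenly used in the spreadsheet

implied_value_wrong = price_per_share * wrong_share_count
implied_value_true = price_per_share * true_share_count

# A few million phantom shares, multiplied by the offer price,
# is a nine-figure error.
print(f"overstatement: ${implied_value_wrong - implied_value_true:,.0f}")
```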


All models are wrong…

… but some are more wrong than others. It’s emerged that a calculator for cholesterol-related heart disease risk is giving some highly dubious results, so completely healthy people could end up taking unnecessary drugs. It’s not clear whether the problem is in the specification or the implementation, but either way the output can’t be trusted.

The result was that the calculator overpredicted risk by 75 to 150 percent, depending on the population. A man whose risk was 4 percent, for example, might show up as having an 8 percent risk. With a 4 percent risk, he would not warrant treatment under the guidelines, which say treatment is advised for those with at least a 7.5 percent risk and can be considered for those whose risk is 5 percent.
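The effect on treatment decisions follows directly from the thresholds quoted above. A minimal sketch (the doubling here is illustrative, at the top of the reported 75–150 percent range):

```python
# Treatment thresholds as quoted: advised at >= 7.5%, considered at >= 5%.
def treatment_advice(risk):
    if risk >= 0.075:
        return "treatment advised"
    if risk >= 0.05:
        return "treatment can be considered"
    return "no treatment warranted"

true_risk = 0.04
overpredicted_risk = true_risk * 2  # the calculator shows 8%

print(treatment_advice(true_risk))           # no treatment warranted
print(treatment_advice(overpredicted_risk))  # treatment advised
```

A model error that looks modest in percentage terms flips the patient across two decision thresholds at once.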

According to the New York Times (may be gated), questions were raised a year ago, before the calculator was released, but somehow the concerns weren’t passed on to the right people. It’s difficult to tell from a press article, but it appears as if those responsible for the calculator are reacting extremely defensively, and not really admitting that there’s anything wrong with the model.

… while the calculator was not perfect, it was a major step forward, and the guidelines already say patients and doctors should discuss treatment options rather than blindly follow a calculator.

Of course they’re right, in that you should never believe a model to the exclusion of all other evidence, but it’s very difficult for non-experts not to. Somehow, something coming out of a computer always seems more reliable than it actually is.


Unlikely but just about plausible

I’ve recently been working with the Centre for Risk Studies in Cambridge on some extreme scenarios: one-in-200, or even less likely, events. It’s been an interesting challenge, not least because it’s very difficult to make things extreme enough. We find ourselves saying that the step in the scenario we’re working on would never actually happen, because not everything would go wrong in the right way. But of course that’s just the point: we’re looking at Swiss cheese situations.

A couple of times we’ve dreamt up something that we thought was really unlikely, only for something remarkably similar to turn up in the news. We came up with the idea that data could be irretrievably corrupted, and a few days later found ourselves reading about a Xerox copier that irretrievably corrupted the images.

So I was really interested to read a story about a security researcher who’s apparently found a really nasty piece of malware — except it’s not clear if he’s making the whole thing up.

People following this story fall into a few different camps. Many believe everything he says — or at least most of it — is true. Others think he’s perpetrating a huge social engineering experiment, to see what he can get the world and the media to swallow. A third camp believes he’s well-intentioned, but misguided due to security paranoia nurtured through the years.

The thing is, the individual pieces of the scenario are all just about possible. But is it possible for them all to happen in a connected way? For the holes in the Swiss cheese to line up?

The absolutely amazing thing about this story is that nearly everything Ruiu reveals is possible, even the more unbelievable details. Ruiu has also been willing to share what forensic evidence he has with the public (you can download some of the data yourself) and specialized computer security experts.

Where developments start getting preposterous, no matter how much leeway you give him, is how many of the claims are unbelievable (not one, not two, but all of them) and why much of the purported evidence is supposedly modified by the bad guys after he releases it, thus eliminating the evidence. The bad guys (whoever they are) are not only master malware creators, but they can reach into Ruiu’s public websites and remove evidence within images after he has posted it. Or the evidence erases itself as he’s copying it for further distribution.

Again, this would normally be the final straw of disbelief, but if the malware is as devious as described and does exist, who’s to say the bad guys don’t have complete control of everything he’s posting? If you accept all that Ruiu is saying, there’s nothing to prove it hasn’t happened.

I don’t know. I haven’t looked into the details at all, and probably wouldn’t understand them even if I did. But there’s certainly a lesson here for those of us developing unlikely scenarios: it’s difficult to make things up that are more unlikely than things that actually happen.


Challenge is difficult

We all know that feeling: people are talking about something as if they expect you to know what it is, or understand it, and you’re going to look really stupid if you admit ignorance. It’s a common phenomenon in all sorts of fields, not least where technical matters are concerned, as discussed in this article.

Not long before, I had started noticing a habit I had, a tendency to nod or make vague assentive noises when people around me talked about things I’d never heard of.

When I did this, my motivation wasn’t to claim knowledge I didn’t have as much as to deflect a need for outright admission of ignorance. I’d let the moment glide past and later scamper off to furtively study up.

Farnam Street points out that this can cause problems in organisations:

In group settings, this has led to what psychologists call ‘pluralistic ignorance,’ a psychological state characterized by the belief that one’s private thoughts are different from those of others. This causes huge problems in organizations.

Consider an example. You’re in a large meeting with the senior management of your organization to discuss an initiative that spans across the organization and involves everyone in the room. You hear words come out, someone may even ask you, do you follow? And yes, of course you follow — you don’t want to be the only person in the room without a clue.

… So you walk out of the room wondering what you just agreed to do. You have no idea. Your stress goes up, you run around asking others, and quickly discover they are just as confused as you are.

It sounds bad, sure, but it’s even worse if you’re there in order to provide challenge. Challenge is difficult, but it’s really important: in fact it’s the primary purpose of many committees, up to and including the Board level.


Low tech risks

High tech risks are out there, and are potentially serious, but low tech risks don’t go away, and may be just as serious.

For example, we learned recently that Edward Snowden managed to get hold of people’s user IDs and passwords, giving him unauthorised access to some of the classified information that he then leaked.

I’ve worked in several organisations where it was standard practice for sysadmins to ask me for my password when they needed to fix a problem on my machine. I would always complain, but there was little I could do about it, especially as in one case you weren’t allowed to change your password twice within (say) three days. And there are any number of websites that first insist you register with them in order to make full use of the site, then confirm your password by email after you’ve registered and, sometimes, whenever you change it. They are getting less common, but they still exist.

And then you get the problem of your bank ringing you up out of the blue, and asking you to confirm your identity. No, sorry, I don’t give out personal information over the phone to unknown callers.

It’s difficult enough to keep track of passwords without reusing them. I have a reasonably simple scheme, based on a standard stem with variations derived from the site address, but some organisations insist on a rather longer password than I usually use, or require some special characters, or forbid the use of others. It’s especially annoying that the fussiest sites seem to be ones that aren’t particularly sensitive, in that they don’t hold any personal information.

So I can’t rely on my memory alone, and I use LastPass to record passwords, memorable phrases, dates, and answers to all those security questions that don’t actually have obvious answers.

In general, it seems to me that there are still too many organisations that don’t follow good practice, and require risky behaviour from users. Things don’t seem to change much: I’ve written about this before.


The Bankers’ New Clothes

Way back in July I wrote a review of The Bankers’ New Clothes: What’s Wrong with Banking and What to Do about It by Anat Admati & Martin Hellwig for The Actuary magazine. The review has finally been published, but it’s hard to find on the website and doesn’t seem to have a permalink yet. So here it is.

It’s fairly obvious from the title of this book that it’s not going to be full of praise for bankers. I don’t think it’s giving anything away to say that that’s an understatement: a theme running throughout the book is that bankers mislead the rest of us about what’s involved in banking and about basic financial concepts. But the main message is even simpler than that – if banks were more highly capitalised, they would be less likely to fail, and, importantly, there’s no good reason why they shouldn’t be. It’s not a subtle message on the face of it, and you might wonder whether it’s enough for a whole book. It is, for two reasons.

First, a surprisingly large part of the book is taken up with a very clear explanation of what capital is and how it works. There’s a simple running example – Kate, who borrows money in order to buy a house – which is gradually elaborated to illustrate a range of different concepts, including leverage, guarantees, and return on equity. This performs the useful function of bringing us back to basics: capital (or equity) is the excess of assets over liabilities, it really is as simple as that. Well, only slightly more complicated – it’s what the excess of assets over liabilities would be if the accounts were realistic. The book does a reasonable job of pointing out that different accounting treatments can radically affect the answer.

Second, there’s a lot of emphasis on debunking the argument that banks are different, and that the usual rules don’t apply to them. This is where the book’s title comes from. It’s this second aspect that is, in my view, the really important part of the book; it’s also, unfortunately, weaker than it could have been.

The authors take the overall line that banks aren’t really different from any other company. High levels of leverage may produce high rates of return, but they also mean higher risk. More capital means a bigger buffer. It’s really not rocket science. The problem is that the villains of this piece, the bankers, dress it up as if it is rocket science, and obfuscate what’s going on as they do so. For example, the authors point out that bankers talk about holding capital: “… Apple and Wal-Mart are not said to ‘hold’ their equity. This is not a silly quibble about words. The language confusion creates mental confusion about what capital does and does not do.”

This is, on the whole, an effective line of argument, and has strong resonances. It would be even more effective if the authors took some of the bankers’ positions a bit more seriously, and went into more detail about why they think that way rather than simply pointing out how misleading some of their statements are. After all, at least some bankers are both smart and honest, so presumably there’s some intellectual backing to their reasoning. It would have been good for that to be exposed and subjected to rigorous analysis. But the book’s emphasis on the basic similarity of banks to other enterprises is important. So often, in many different contexts, we hear the cry from vested interests “but you don’t understand, we’re different because…” Nearly always this precedes a plea for special treatment – not only are they different, but different in a way that makes life harder. We should distrust this line of argument. Exceptional treatment requires exceptional justification. Exceptions introduce complexity, which results in unintended consequences. I’m not saying that exceptional treatment is never the right thing, but if the whole edifice becomes so intricate that it’s incredibly difficult to explain to a slightly sceptical intelligent lay person there’s a definite suspicion that it’s built on sand.

There’s another way of looking at this. Banks are indeed a special case: they are a major source of systemic risk in the global financial system. They should therefore be at least as safe as non-banks, and should be capitalised at least as highly. This isn’t an argument that’s often heard from within the banking community, or indeed elsewhere, which is why this book is important. Even with the recent proposals for tougher capital requirements on banks, we aren’t seeing proposals for capital levels of 20–30% of total assets, which is what the book recommends. Now that’s something to think about.

I don’t think I’d write the review very differently if I were writing it now, though I might refer to some of the evidence backing the view that raw leverage is important, and risk-weighted capital is not. For instance, just look at charts 3 and 5 in this speech by Andrew Haldane of the Bank of England.




The big guys don’t always know what they’re doing

You’d think that a really big software company, like Adobe, would know what it’s doing. But no. You may have noticed that there was a big data breach: millions of usernames and (encrypted) passwords were stolen. But they were encrypted, so no big deal, right?

Ah. Well. That’s the point. As this article explains, it was indeed the encrypted passwords that were stolen, not the hashes (if this is gobbledygook to you, the article has a very clear explanation of what this means). As the password hints were stolen too, it turns out to be really easy to decrypt many of them.

Now, I am by no means a security expert. And for websites I build nowadays, I use a ready-rolled solution (usually WordPress). But when I wrote things from scratch, even I knew better than to store the encrypted passwords. I may not have used the most secure hashing algorithm, or proper salting, but I didn’t encrypt the passwords.
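For what it’s worth, here’s a minimal sketch of the salted-hashing approach, using only Python’s standard library. It illustrates the idea the article describes (one-way hashing, so a stolen database doesn’t directly reveal passwords), not a complete authentication system:

```python
import hashlib
import os

# Hashing with a per-user salt: one-way, so an attacker who steals the
# stored values must guess-and-hash each password rather than decrypt it.
def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

salt = os.urandom(16)
stored = hash_password("correct horse battery staple", salt)

# To check a login attempt, hash the attempt with the same salt and compare;
# the original password is never stored anywhere.
def verify(attempt: str, salt: bytes, stored: bytes) -> bool:
    return hash_password(attempt, salt) == stored

assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)
```

The contrast with Adobe’s approach is that encryption is reversible by design: anyone with the key (or, as the article explains, with enough identically-encrypted blocks and password hints) can get the passwords back.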

(HT Bruce Schneier)


Swiss cheese

Why do things go wrong? Sometimes, it’s a whole combination of factors. Felix Salmon has some good examples, and reminded me of one of my favourite metaphors: the Swiss cheese model of accident causation.

In the Swiss Cheese model, an organization’s defenses against failure are modeled as a series of barriers, represented as slices of cheese. The holes in the slices represent weaknesses in individual parts of the system and are continually varying in size and position across the slices. The system produces failures when a hole in each slice momentarily aligns, permitting (in Reason’s words) “a trajectory of accident opportunity”, so that a hazard passes through holes in all of the slices, leading to a failure.
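A toy Monte Carlo version of the model shows why stacking even leaky layers helps. It assumes, unrealistically, that the layers fail independently; correlated holes are exactly what the metaphor warns about:

```python
import random

# Toy simulation of the Swiss cheese model: each defensive layer
# independently fails to stop a hazard with some small probability
# (its "hole"). An accident occurs only when every layer fails at once.
def accident_probability(hole_probs, trials=100_000, seed=42):
    rng = random.Random(seed)
    accidents = sum(
        all(rng.random() < p for p in hole_probs)
        for _ in range(trials)
    )
    return accidents / trials

# Four layers, each with a 10% hole: an accident needs all four to align.
p = accident_probability([0.1, 0.1, 0.1, 0.1])
print(f"estimated accident rate: {p:.5f}")  # close to 0.1**4 = 0.0001
```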

It’s a lovely vision: those little accidents waiting to happen, wriggling through the slices of cheese. But as Salmon points out:

… it’s important to try to prevent failures by adding extra layers of Swiss cheese, and by assiduously trying to minimize the size of the holes in any given layer. But as IT systems grow in size and complexity, they will fail in increasingly unpredictable and catastrophic ways. No amount of post-mortem analysis, from Congress or the SEC or anybody else, will have any real ability to stop those catastrophic failures from happening. What’s more, it’s futile to expect that we can somehow design these systems to “fail well” and thereby lessen the chances of even worse failures in the future.

Which reminds me of Tony Hoare‘s comment on complexity and reliability

There are two ways of constructing a software design: One way is to make it so simple that there are obviously no deficiencies, and the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult.



The right tool?

The next time you notice something being done in Excel where you work, take a moment to question whether it’s the right tool for the job, or whether you or someone in your organisation is a tool for allowing its use.

No, not my words, but from the FT’s consistently excellent Alphaville blog. The point is, it’s easy to use Excel. But it’s very hard to use Excel well.

There are many people out there who can use Excel to solve a problem. They knock up a spreadsheet with a few clicks of the mouse, some dragging and dropping, a few whizzo functions, some spiffy charts, and it all looks really slick. But what if anything needs to be changed? What about sensitivity testing? And how do you know you got it right in the first place? Building spreadsheets is an area in which people are overconfident of their abilities, and tend to think that nothing can go wrong.

Instead of automatically reaching for the mouse, why not Stop Clicking, Start Typing?
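By way of illustration, here’s what a typical spreadsheet task looks like when typed instead of clicked: a small, self-contained Python script (the data is made up) whose every step is visible, reviewable and rerunnable:

```python
import csv
import io

# A scripted, repeatable version of a routine spreadsheet task:
# read some tabular data, compute a summary, print the result.
raw = """region,sales
north,120
south,95
east,143
west,102
"""

rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(int(r["sales"]) for r in rows)
mean = total / len(rows)
print(f"total: {total}, mean: {mean}")  # total: 460, mean: 115.0
```

Unlike a chain of mouse operations, the calculation can be read, diffed, tested, and run again on next month’s data without anyone remembering which cells to drag.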

But we won’t. There’s a huge tendency to avoid learning new things, and everyone thinks they know how to use Excel. The trouble is, they know how to use it for displaying data (mostly), and don’t realise that what they are really doing is computer programming. A friend of mine works with a bunch of biologists, and says

I spend most of my time doing statistical analyses and developing new statistical methods in R. Then the biologists stomp all over it with Excel, trash their own data, get horribly confused, and complain that “they’re not programmers” so they won’t use R.

But that’s the problem. They are programmers, whether they like it or not.

Personally, I don’t think that things will change. We’ll keep on using Excel, there will keep on being major errors in what we do, and we’ll continue to throw up our hands in horror. But it’s rather depressing — it’s so damned inefficient, if nothing else.