Categories: Demographics Modelling

Second hand water

Randall Munroe, over at xkcd, does some really funny cartoons, and is a physicist by training. Every week he answers a hypothetical question: this week it’s

How much of the Earth’s currently-existing water has ever been turned into a soft drink at some point in its history?

His answer, by the way, is “not much”. On the other hand, almost all of it has been drunk by a dinosaur.

One of the things I really like about these “what-if” answers is the way they demonstrate one of the important aspects of modelling: working out what’s significant and what’s not. And significance depends very much on what the purpose of the model is. Often, Munroe can make some really sweeping assumptions that are clearly not borne out in practice, but are equally clearly the right approximations to make for his purposes. And sometimes he says that he doesn’t know what assumption to make.

An example of a sweeping assumption comes in the answer to

How close would you have to be to a supernova to get a lethal dose of neutrino radiation?

where he assumes that you’re not going to get killed by being incinerated or vaporised. 

And in answering the question

When, if ever, will Facebook contain more profiles of dead people than of living ones?

the difficult assumption is whether Facebook is a flash in the pan, and stops adding new users, or whether it will become part of the infrastructure, and continue adding new users for ever (or at least for 50 or 60 years). There are also some sweeping demographic assumptions, of course.
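Out of curiosity, here's a toy version of that calculation in Python. It is emphatically not Munroe's model: the current user count, the flat 1% mortality and the signup rate are all numbers I've made up, purely to show the shape of the two scenarios.

```python
# A toy version of the dead-vs-living question -- NOT Munroe's model.
# Every number below (user count, mortality, signup rate) is made up,
# purely to show how much the "infrastructure vs flash in the pan"
# assumption matters.

def crossover_year(signups_per_year, years_of_growth,
                   living=1.1e9, mortality=0.01, start=2013):
    """Return the first year in which dead profiles outnumber living ones."""
    dead = 0.0
    year = start
    while dead <= living:
        deaths = living * mortality   # crude flat death rate, no age structure
        dead += deaths
        living -= deaths
        if year - start < years_of_growth:
            living += signups_per_year
        year += 1
    return year

# Scenario A: flash in the pan -- no new users from today.
print("No growth:  ", crossover_year(signups_per_year=0, years_of_growth=0))
# Scenario B: part of the infrastructure -- steady signups for 60 years.
print("60yr growth:", crossover_year(signups_per_year=5e7, years_of_growth=60))
```

Even this crude version shows why that one assumption dominates the answer: with these made-up numbers, it shifts the crossover by decades.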

(and while you’re at it, read the one on stirring tea)

I’m reminded of two things here. The first is doing mechanics problems in A-level maths: there was nothing difficult about the maths involved; the trick was all in recognising the type of problem. Was it a weightless, inelastic string, or a frictionless surface? It was all about building a really simple model.
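For anyone who didn't sit those papers, here's the archetypal setup (my reconstruction, not any specific exam question): a block on a frictionless table, attached by a light inelastic string over a smooth pulley to a hanging mass. Once you've recognised the type of problem, the whole model is two lines of Newton's second law:

```latex
% Block of mass m_1 on a frictionless table, joined by a light
% inelastic string over a smooth pulley to a hanging mass m_2.
% The idealised string means one shared acceleration a and one tension T.
\begin{align*}
  m_1 a &= T         & &\text{(block on the table)} \\
  m_2 a &= m_2 g - T & &\text{(hanging mass)} \\
  \Rightarrow\quad a &= \frac{m_2}{m_1 + m_2}\,g,
  & T &= \frac{m_1 m_2}{m_1 + m_2}\,g.
\end{align*}
```

All the physical content is in the adjectives: "light", "inelastic", "frictionless" are exactly the sweeping assumptions that make the model this small.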

The second is those Google interview questions we used to hear so much about, like how many golf balls fit in a school bus, or how many piano tuners there are in the world. The trick with these is to come up with a really simple model and then make reasonable guesses for the assumptions. And, of course, be aware of your model’s limitations.
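The piano-tuner one is worth sketching, because the model really is tiny. Here's a version in Python, scoped to a single large city rather than the whole world; every input is a guess of mine, which is exactly the point:

```python
# A back-of-envelope Fermi estimate of "how many piano tuners are there?",
# scoped to one large city for simplicity. Every input below is a guess;
# the point is the structure of the model, not the numbers.

population        = 9_000_000   # people in the city (guess)
people_per_house  = 2           # average household size (guess)
piano_fraction    = 1 / 20      # fraction of households with a piano (guess)
tunings_per_year  = 1           # how often a piano gets tuned (guess)
tunings_per_day   = 4           # jobs one tuner can do in a day (guess)
working_days      = 250         # working days per year (guess)

pianos = population / people_per_house * piano_fraction
demand = pianos * tunings_per_year        # tunings needed per year
supply = tunings_per_day * working_days   # tunings one tuner can do per year
tuners = demand / supply

print(f"{pianos:,.0f} pianos -> roughly {tuners:.0f} piano tuners")
```

Whether the answer comes out at 100 or 500 matters much less than whether the structure (pianos, tunings needed, tuner capacity) is roughly right, and whether you know which guesses the answer is most sensitive to.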

Categories: Modelling

Implausible assumptions

Antonio Fatas has some interesting things to say about the reliance of economic models on implausible assumptions.

All models rely on assumptions and economic models are known (and made fun of) for relying on very strong assumptions about rationality, perfect information,… Many of these assumptions are unrealistic but they are justified as a way to set a benchmark model around which one is then allowed to model deviations from the assumptions.

The problem is, he says, that an unrealistically high standard of proof is set for departing from the benchmark. Essentially, it’s OK to use the benchmark model, which we all know is just plain wrong, but you need really strong evidence in order to support the use of other assumptions, even though they just might be right.

I suspect the problem is that it’s easy to differentiate the benchmark assumptions: they’re the ones that we wish were true, in a way, because they’ve got nice properties. All the other assumptions, which might actually be true, are messy, and why choose one set over another?


Categories: Old site

Justify your results

At the recent GIRO conference, Rob Curtis from the FSA drew our attention to the recent consultation paper: CP06/16: Prudential changes for insurers. The part that made me prick up my ears was the following:

The written record of a firm’s individual capital assessment, as carried out in accordance with Sub-Principle 1 submitted by the firm to the FSA must:

  1. in relation to the assessment comparable to a 99.5% probability over a one year timeframe that the value of assets exceeds the value of liabilities, document the reasoning and judgements underlying that assessment and, in particular, justify:
     (a) the assumptions used;
     (b) the appropriateness of the methodology used; and
     (c) the results of the assessment.

  2. identify the major differences between that assessment and any other assessments carried out by the firm using a different probability measure.

It’s 1 (c) that caught my attention, of course. I’ve written elsewhere about what you have to do to believe the results of your models: you have to be able to trace the results back to model specification, data and parameters. This means having good audit trails, thorough testing (and records of those tests) and effective version control.
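To make that concrete, here's a minimal sketch of the kind of audit record I have in mind. The field names and file layout are mine, not anything prescribed by the FSA; the point is simply that every model run captures the code version, the exact data and the parameters that produced the results.

```python
# A minimal sketch of an audit trail for model runs: enough to trace any
# result back to specification, data and parameters. Field names, file
# layout and the example call at the bottom are all illustrative.

import hashlib
import json
import subprocess
from datetime import datetime, timezone

def sha256_of(path):
    """Hash the input data file so the exact dataset can be identified later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def record_run(data_path, params, results, log_path="model_runs.jsonl"):
    """Append one audit record per model run to a local log file."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        # Which version of the model code produced these results
        # (assumes the model lives in a git repository).
        "code_version": subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True).strip(),
        "data_sha256": sha256_of(data_path),
        "parameters": params,
        "results": results,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage -- the file name and figures are made up:
# record_run("claims_2006.csv",
#            {"quantile": 0.995, "horizon_years": 1},
#            {"capital_requirement": 1.23e8})
```

None of this is sophisticated, but without something like it you simply cannot justify your results in the sense the FSA is asking for: you can't say which model, which data and which assumptions produced them.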