Frances Coppola has an interesting take on how the positions of both women and men in society are changing.
I just can’t resist this one. Vista Equity Partners is paying around $100 million less than expected for Tibco Software Inc because Goldman Sachs got the number of shares wrong in the spreadsheet that did all the calculations. OK, $100 million isn’t much in the context of a $4 billion deal, but it’s an awful lot of money in any other context. Then again, it’s only just over twice the fees Goldman charged Tibco for the transaction. It’s not clear how the mistake arose.
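The arithmetic behind a share-count error is simple enough to sketch. All the figures below are invented for illustration (the actual counts and prices weren’t disclosed at this level of detail); the point is how a few million phantom shares dilute the per-share price and leave the sellers short:

```python
# Hypothetical numbers only: they merely illustrate the mechanism.
offer_value = 4.2e9    # total equity value the buyer intends to pay (assumed)
true_shares = 160e6    # actual fully diluted share count (assumed)
wrong_shares = 164e6   # count with stale or duplicated shares left in (assumed)

# Per-share price negotiated off the inflated count...
price_from_wrong_count = offer_value / wrong_shares

# ...applied to the real number of shares actually outstanding.
cost_at_true_count = price_from_wrong_count * true_shares

shortfall = offer_value - cost_at_true_count
print(f"Per-share price: ${price_from_wrong_count:.2f}")
print(f"Shortfall to sellers: ${shortfall / 1e6:.0f} million")
```

With these made-up inputs, an extra 4 million shares in the denominator is enough to shave roughly $100 million off what the sellers receive.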
How much of the Earth’s currently-existing water has ever been turned into a soft drink at some point in its history?
His answer, by the way, is “not much”. On the other hand, almost all of it has been drunk by a dinosaur.
One of the things I really like about these “what-if” answers is the way they demonstrate one of the important aspects of modelling: working out what’s significant and what’s not. And significance depends very much on what the purpose of the model is. Often, Munroe can make some really sweeping assumptions that are clearly not borne out in practice, but are equally clearly the right approximations to make for his purposes. And sometimes he says that he doesn’t know what assumption to make.
An example of a sweeping assumption comes in the answer to
How close would you have to be to a supernova to get a lethal dose of neutrino radiation?
where he assumes that you’re not going to get killed by being incinerated or vaporised.
And in answering the question
When, if ever, will Facebook contain more profiles of dead people than of living ones?
the difficult assumption is whether Facebook is a flash in the pan, and stops adding new users, or whether it will become part of the infrastructure, and continue adding new users for ever (or at least for 50 or 60 years). There are also some sweeping demographic assumptions, of course.
(and while you’re at it, read the one on stirring tea)
I’m reminded of two things here. The first is doing mechanics problems in A-level maths: there was nothing difficult about the maths involved, the trick was all in recognising the type of problem. Was it a weightless, inelastic string, or a frictionless surface? It was all about building a really simple model.
The second is those Google interview questions we used to hear so much about, like how many golf balls fit in a school bus, or how many piano tuners there are in the world. The trick with these is to come up with a really simple model and then make reasonable guesses for the assumptions. And, of course, be aware of your model’s limitations.
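This kind of Fermi estimate is easy to sketch in code. Every number below is a guess rather than data (a hypothetical piano-tuner estimate for a city of nine million); as with Munroe’s answers, the model and its stated assumptions are the point, not the answer:

```python
# A hedged Fermi estimate: how many piano tuners in a large city?
# Every input is an explicit guess, so the limitations are visible.
population = 9_000_000            # metro population, roughly
people_per_household = 2          # guess
households_with_piano = 1 / 20    # guess: 5% of households own a piano
tunings_per_piano_per_year = 1    # guess
tunings_per_day = 4               # guess: one tuner manages ~4 tunings a day
working_days = 250                # guess

pianos = population / people_per_household * households_with_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_day * working_days  # tunings per tuner per year
tuners = tunings_needed / tuner_capacity

print(round(tuners))  # an order-of-magnitude answer, nothing more
```

Change any guess by a factor of two and the answer moves by a factor of two; what matters is that it comes out in the hundreds rather than the tens or the tens of thousands.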
… but some are more wrong than others. It’s emerged that a calculator for cholesterol-related heart disease risk is giving some highly dubious results. So completely healthy people could start taking unnecessary drugs. It’s not clear whether the problem lies in the specification or the implementation, but either way the output can’t be trusted.
The answer was that the calculator overpredicted risk by 75 to 150 percent, depending on the population. A man whose risk was 4 percent, for example, might show up as having an 8 percent risk. With a 4 percent risk he would not warrant treatment under the guidelines, which say treatment is advised for those with at least a 7.5 percent risk and can be considered for those whose risk is 5 percent.
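The way a calibration error of that size flips a treatment decision can be sketched in a few lines. The thresholds are the ones quoted above; the doubling factor is just one point within the reported 75-to-150-percent overprediction range:

```python
# Sketch of how a miscalibrated risk calculator flips a treatment decision.
# Thresholds are from the guidelines quoted above; the 2x factor (100% too
# high) is an assumed value within the reported 75-150% range.
TREAT_THRESHOLD = 0.075      # treatment advised at or above 7.5% ten-year risk
CONSIDER_THRESHOLD = 0.05    # treatment may be considered at or above 5%

def decision(risk):
    if risk >= TREAT_THRESHOLD:
        return "treat"
    if risk >= CONSIDER_THRESHOLD:
        return "consider"
    return "no treatment"

true_risk = 0.04
reported_risk = true_risk * 2.0   # the miscalibrated calculator's output

print(decision(true_risk))      # no treatment
print(decision(reported_risk))  # treat
```

The same healthy man lands on opposite sides of the 7.5 percent line depending on which number the doctor sees, which is exactly why the overprediction matters.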
According to the New York Times (may be gated), questions were raised a year ago, before the calculator was released, but somehow the concerns weren’t passed on to the right people. It’s difficult to tell from a press article, but it appears as if those responsible for the calculator are reacting extremely defensively, and not really admitting that there’s anything wrong with the model.
while the calculator was not perfect, it was a major step forward, and that the guidelines already say patients and doctors should discuss treatment options rather than blindly follow a calculator
Of course they’re right, in that you should never believe a model to the exclusion of all other evidence, but it’s very difficult for non-experts not to. Somehow, something coming out of a computer always seems more reliable than it actually is.
I’ve recently been working with the Centre for Risk Studies in Cambridge on some extreme scenarios: one-in-200, or even less likely, events. It’s been an interesting challenge, not least because it’s very difficult to make things extreme enough. We find ourselves saying that the step in the scenario we’re working on would never actually happen, because not everything would go wrong in the right way. But of course that’s just the point: we’re looking at Swiss cheese situations.
A couple of times we’ve dreamt up something that we thought was really unlikely, only for something remarkably similar to turn up in the news. We came up with the idea that data could be irretrievably corrupted, and a few days later found ourselves reading about a Xerox copier that irretrievably corrupted the images.
Then there’s the strange tale of security researcher Dragos Ruiu’s “badBIOS”, malware he claims can survive reflashing and jump air gaps. People following this story fall into a few different camps. Many believe everything he says — or at least most of it — is true. Others think he’s perpetrating a huge social engineering experiment, to see what he can get the world and the media to swallow. A third camp believes he’s well-intentioned, but misguided due to security paranoia nurtured through the years.
The thing is, the individual pieces of the scenario are all just about possible. But is it possible for them all to happen in a connected way? For the holes in the Swiss cheese to line up?
The absolutely amazing thing about this story is that nearly everything Ruiu reveals is possible, even the more unbelievable details. Ruiu has also been willing to share what forensic evidence he has with the public (you can download some of the data yourself) and specialized computer security experts.
Where developments start getting preposterous, no matter how much leeway you give him, is how many of the claims are unbelievable (not one, not two, but all of them) and why much of the purported evidence is supposedly modified by the bad guys after he releases it, thus eliminating the evidence. The bad guys (whoever they are) are not only master malware creators, but they can reach into Ruiu’s public websites and remove evidence within images after he has posted it. Or the evidence erases itself as he’s copying it for further distribution.
Again, this would normally be the final straw of disbelief, but if the malware is as devious as described and does exist, who’s to say the bad guys don’t have complete control of everything he’s posting? If you accept all that Ruiu is saying, there’s nothing to prove it hasn’t happened.
I don’t know. I haven’t looked into the details at all, and probably wouldn’t understand them even if I did. But there’s certainly a lesson here for those of us developing unlikely scenarios: it’s difficult to make things up that are more unlikely than things that actually happen.
A few days ago I noted the difficulty of thinking in terms of dependency ratios: being economically active is a continuum, rather than black or white. There’s another side to the story, too. An aging population can provide opportunity, not only by producing products that appeal directly to a growing segment of the population, but also by providing services to help care for them.
We all know that feeling: people are talking about something as if they expect you to know what it is, or understand it, and you’re going to look really stupid if you admit ignorance. It’s a common phenomenon in all sorts of fields, not least when technical matters are concerned, as discussed in this article.
Not long before, I had started noticing a habit I had, a tendency to nod or make vague assentive noises when people around me talked about things I’d never heard of.
When I did this, my motivation wasn’t to claim knowledge I didn’t have as much as to deflect a need for outright admission of ignorance. I’d let the moment glide past and later scamper off to furtively study up.
Farnam Street points out that this can cause problems in organisations:
In group settings, this has led to what psychologists call ‘pluralistic ignorance,’ a psychological state characterized by the belief that one’s private thoughts are different from those of others. This causes huge problems in organizations.
Consider an example. You’re in a large meeting with the senior management of your organization to discuss an initiative that spans across the organization and involves everyone in the room. You hear words come out, someone may even ask you, do you follow? And yes, of course you follow — you don’t want to be the only person in the room without a clue.
… So you walk out of the room wondering what you just agreed to do. You have no idea. Your stress goes up, you run around asking others, and quickly discover they are just as confused as you are.
It sounds bad, sure, but it’s even worse if you’re there in order to provide challenge. Challenge is difficult, but it’s really important: in fact it’s the primary purpose of many committees, up to and including the Board level.
It’s a commonplace that high-achieving women often suffer from the imposter syndrome — a belief that they do not deserve the success they have achieved. Athene Donald has some interesting posts about it in the context of academia.
It’s also commonly observed that women tend towards self-deprecation. Sometimes it’s in jest, but not always. Lucy Kellaway wrote about the tendency the other week (may be gated for you).
This is meant to be one of the ways women sabotage themselves. We talk ourselves down and by doing so, we hold ourselves down. Even women who have managed to rise go on harming themselves by incontinently banging on about how hopeless they are.
She goes on to point out that actually it’s a rather effective tactic, as long as it’s completely clear that you are not remotely useless in the context being discussed. Tony Blair and Boris Johnson are past masters of the technique.
Self-deprecation is only dangerous if there is any chance at all that the person you are talking to might agree with it. … Only when it is clear to everyone that a woman’s skill is beyond doubt will it be time for her to start telling everyone that she is useless.
So, taking successful men as our model doesn’t always work. Patterns that work for them may not work for us. If people are predisposed to think that we aren’t completely on top of what we are doing, admitting any possibility of failure will be taken at face value. It’s only if people are completely confident that we are doing a good job that any hint of self deprecation won’t be pounced on.
Frances Coppola makes some interesting points about dependency ratios, sparked by this article from The Economist. We often see charts showing the proportion of the population aged over 65 compared to those between 16 and 64, based on the assumption that the former aren’t working and the latter are.
The trouble is, as Frances points out, that the assumption is a massive oversimplification. At the younger end, there are a lot of young people in education. In the middle, you’ve got the unemployed and disabled, those not working through choice, and those who are working but who also receive benefits. And at the older end there are increasing numbers of people who are both working and drawing pensions. Being economically active is not an all-or-nothing state.
Frances argues that, on the whole, there are few people over 65 who are not partially or fully dependent. But the main reason that the raw ratio is misleading is the large number of younger people who are also partially or fully dependent.
The dependency ratio is a crude measure that takes no account of the actual economic contributions made by people in different circumstances and at different stages in their lives. A few over-65s working mainly part-time to top up their state pensions doesn’t invalidate the ONS’s dependency ratio calculation. But a large number of people dependent on state benefits to top up their wages does. We don’t just have a demographic problem. We have a low wage problem.
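The gap between the crude head-count ratio and one that allows for who is actually working can be sketched with made-up numbers (all figures below are assumptions for illustration, not ONS data):

```python
# Crude vs adjusted dependency ratio, with assumed figures.
# The crude ratio counts heads by age band; the adjusted one reclassifies
# people by whether they are actually economically active.
pop_16_64 = 40_000_000
pop_65_plus = 11_000_000

crude_ratio = pop_65_plus / pop_16_64

# Adjustments (all assumed): students, unemployed, and others not working
# among the 16-64s; a minority of over-65s still working.
inactive_16_64 = 10_000_000
active_65_plus = 1_000_000

dependents = (pop_65_plus - active_65_plus) + inactive_16_64
workers = (pop_16_64 - inactive_16_64) + active_65_plus
adjusted_ratio = dependents / workers

print(f"Crude: {crude_ratio:.2f}, adjusted: {adjusted_ratio:.2f}")
```

With these inputs the adjusted ratio comes out more than twice the crude one, and almost all of the difference comes from the dependent under-65s rather than the working over-65s, which is Frances’s point.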
High tech risks are out there, and are potentially serious, but low tech risks don’t go away, and may be just as serious.
For example, we learned recently that Edward Snowden managed to get hold of people’s user IDs and passwords, giving him unauthorised access to some of the classified information that he then leaked.
I’ve worked in several organisations where it was standard practice for sysadmins to ask me for my password when they needed to fix a problem on my machine. I would always complain, but there was little I could do about it, especially as in one case you weren’t allowed to change your password twice within (say) three days. And there are any number of websites that first insist you register with them in order to make full use of the site, then confirm your password by email after you’ve registered and, sometimes, whenever you change it. They are getting less common, but they still exist.
And then you get the problem of your bank ringing you up out of the blue, and asking you to confirm your identity. No, sorry, I don’t give out personal information over the phone to unknown callers.
It’s difficult enough to keep track of passwords without reusing them. I have a reasonably simple scheme, based on a standard stem with additions based on the site address, but some organisations insist on a rather longer password than I usually use, or require some special characters, or forbid the use of others. It’s especially annoying that the most fussy sites seem to be ones that aren’t particularly sensitive, in that they don’t have any personal information.
So I can’t rely on memory alone, and instead use LastPass to record passwords, memorable phrases, dates, and answers to all those security questions that don’t actually have obvious answers.
In general, it seems to me that there are still too many organisations that don’t follow good practice, and require risky behaviour from users. Things don’t seem to change much: I’ve written about this before.