Fourier and McCartney

Here’s a long but fascinating post on deciphering the opening chord in A Hard Day’s Night. Along the way it gives a good explanation of Fourier transforms for the non-mathematician.

It also gives a really good example of why it’s important to look at the overall reasonableness of a result, rather than blindly relying on the maths. Summary of this bit: somebody ran a Fourier analysis on the chord, assumed that the loudest frequencies were the fundamentals, and came up with the following notes:

  • Harrison (electric 12 string guitar): A2 A3, D3 D4, G3 G4, C4 C4
  • McCartney (electric bass guitar): D3
  • Lennon (acoustic 6 string guitar): C5
  • Martin (piano): D3 F3 D5 G5 E6

But why would Lennon play only one note? It was meant to be a dramatic opening: give it all you’ve got. Single notes just don’t cut it.

This observation led to a realisation that the assumption that the loudest frequencies were the fundamentals was flawed. Record producers often fiddle with the frequencies, and in this case it appears that the bass frequencies were turned down.
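To make the pitfall concrete, here’s a minimal sketch in Python with NumPy. The signal and frequencies are hypothetical (a single synthetic note, not the actual recording): the fundamental is attenuated, as if the bass had been turned down, so the loudest bin in the Fourier transform is a harmonic, not the fundamental.

```python
import numpy as np

sr = 8000                 # sample rate (Hz)
t = np.arange(sr) / sr    # one second of samples
f0 = 110.0                # hypothetical fundamental (A2)

# Fundamental attenuated, second harmonic dominant — as if the
# bass frequencies were turned down at the mixing desk.
signal = 0.3 * np.sin(2 * np.pi * f0 * t) + 1.0 * np.sin(2 * np.pi * 2 * f0 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / sr)

loudest = freqs[np.argmax(spectrum)]
print(loudest)  # 220.0 — the harmonic, an octave above the true 110 Hz fundamental
```

Naively reading the loudest peak off this spectrum would tell you the note is an A3 rather than an A2, which is exactly the kind of error the flawed analysis made.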

There’s lots of other interesting stuff in the post about how the author went about his detective work, and what really hits home to me is the variety of techniques he used. The moral of the story is to use all the information at your disposal, not just the maths.


It’s all biblical

Frances Coppola has an interesting take on how the positions of both women and men in society are changing.


It’s the economics, stupid

We often hear that Big Pharma doesn’t develop drugs that treat diseases generally found in the less well off parts of the world, because there’s no money in it. We also hear that antibiotic-resistant superbugs are on the rampage, and that there’s no hope of outwitting them.

OK, what we actually hear is that bacteria are developing resistance to all the known antibiotics. What we don’t usually hear is that the reason that new antibiotics aren’t being discovered is that there’s no money in developing new ones (or not as much money as in other drugs, anyway).

For many years, the problem of resistance remained hypothetical because drug companies were consistently introducing new types of antibiotic. But antibiotics are a one-and-done treatment, and are far less profitable for Big Pharma than drugs taken daily to treat chronic conditions and, preferably, have no competition to keep costs down. … We’re in an evolutionary war with bacteria, but one of our best defenses pulled out due to economic incentives …

There’s more in this fascinating article. HT Izabella Kaminska.


Modern technology

What’s the one piece of technology that no high tech company is without?

Whiteboards. To my mind they’re a huge improvement over their low tech predecessor, the blackboard, but some disagree. They prefer the tactile feel of chalk. I hate the dust, and even thinking about the scraping noise of the back of the blackboard rubber against the board sets my teeth on edge.

The dominance of whiteboards rings true, though. I don’t think I’ve ever worked anywhere with enough whiteboards in the meeting rooms (or enough meeting rooms, where it’s been open plan offices, come to that).


Fighting link rot

We’ve all come across signs of link rot, the phenomenon by which material we link to on the distributed Web vanishes or changes beyond recognition over time. In all cases it’s annoying to follow a link to a page that no longer exists, but sometimes it really matters.

Jonathan Zittrain writes:

We found that half of the links in all Supreme Court opinions no longer work.  And more than 70% of the links in such journals as the Harvard Law Review (in that case measured from 1999 to 2012), currently don’t work.

That’s bad. But they’re doing something about it. The Harvard Law School Library, together with around 30 other libraries around the world, has started a service that will provide permanent archiving and permanent links. It’s a great idea. I think it’s good that it’s curated: permanent links can only be created by authorised users, typically scholarly journals.

There are other efforts to fight link rot, too, with different priorities.

Interesting Software

Reinhart and Rogoff: was Excel the problem?

There’s a bit of a furore going on at the moment: it turns out that a controversial paper in the debate about the after-effects of the financial crisis had some peculiarities in its data analysis.

Rortybomb has a great description, and the FT’s Alphaville and Tyler Cowen have interesting comments.

In summary, back in 2010 Carmen Reinhart and Kenneth Rogoff published a paper, Growth in a Time of Debt, in which they claimed that “median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower.” Reinhart and Rogoff didn’t release the data they used for their analysis. Since then, apparently, people have tried and failed to reproduce the analysis that gave this result.

Now, a paper has been released that does reproduce the result: Herndon, Ash and Pollin’s Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff.

Except that it doesn’t, really. Herndon, Ash and Pollin identify three issues with Reinhart and Rogoff’s analysis, which mean that the result is not quite what it seems at first glance. It’s all to do with the weighted average that R&R use for the growth rates.

First, the data sets cover 20 countries over the period 1946-2009, but R&R exclude the first few years of data for three countries. It turns out that those three countries had high debt levels and solid growth in the omitted periods. R&R didn’t explain these exclusions.

Second, the weights for the averaging aren’t straightforward (or, possibly, they are too straightforward). Rortybomb has a good explanation:

Reinhart-Rogoff divides country years into debt-to-GDP buckets. They then take the average real growth for each country within the buckets. So the growth rate of the 19 years that the U.K. is above 90 percent debt-to-GDP are averaged into one number. These country numbers are then averaged, equally by country, to calculate the average real GDP growth weight.

In case that didn’t make sense, let’s look at an example. The U.K. has 19 years (1946-1964) above 90 percent debt-to-GDP with an average 2.4 percent growth rate. New Zealand has one year in their sample above 90 percent debt-to-GDP with a growth rate of -7.6. These two numbers, 2.4 and -7.6 percent, are given equal weight in the final calculation, as they average the countries equally. Even though there are 19 times as many data points for the U.K.
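The difference between the two weighting schemes is easy to see in code. This sketch uses only the two figures from the example above (the dictionary layout and variable names are mine):

```python
# Per-country figures from the example above:
# (number of years above 90% debt-to-GDP, average real growth in those years, %)
countries = {"UK": (19, 2.4), "New Zealand": (1, -7.6)}

# The R&R scheme: average the per-country averages — one vote per country,
# regardless of how many years each country contributes.
equal_by_country = sum(g for _, g in countries.values()) / len(countries)

# The alternative: weight each country's average by its number of country-years.
total_years = sum(n for n, _ in countries.values())
weighted_by_years = sum(n * g for n, g in countries.values()) / total_years

print(round(equal_by_country, 1))   # -2.6
print(round(weighted_by_years, 1))  # 1.9
```

One New Zealand year drags the equal-by-country average down to -2.6 percent, while weighting by country-years gives 1.9 percent. Which scheme is right is debatable, but the choice clearly drives the headline number.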

Third, there was an Excel error in the averaging. A formula omits five rows. Again, Rortybomb has a good picture:



So, in summary, the weighted average omits some years, some countries, and isn’t weighted in the expected way. It doesn’t seem to me that any one of these is the odd man out, and I don’t think it really matters why either of the omissions occurred: in other words, I don’t think this is a major story about an Excel error.

I do think, though, that it’s an excellent example of something I’ve been worried about for some time: should you believe claims in published papers, when the claims are based on data analysis or modelling?

Let’s consider another, hypothetical, example. Someone’s modelled, say, the effects of differing capital levels on bank solvency in a financial crisis. There’s a beautifully argued paper, full of elaborate equations specifying interactions between this, that and the other. Everyone agrees that the equations are the bee’s knees, and appear to make sense. The paper presents results from running a model based on the equations. How do you know whether the model does actually implement all the spiffy equations correctly? By the way, I don’t think it makes any difference whether or not the papers are peer reviewed. It’s not my experience that peer reviewers check the code.

In most cases, you just can’t tell, and have to take the results on trust. This worries me. Excel errors are notorious. And there’s no reason to think that other models are error-free, either. I’m always finding bugs in people’s programs.

Transparency is really the only solution. Data should be made available, as should the source code of any models used. It’s not the full answer, of course, as there’s then the question of whether anyone has bothered to check the transparently provided information. And, if they have, what they can do to disseminate the results. Obviously for an influential paper like the R&R paper, any confirmation that the results are reproducible or otherwise is likely to be published itself, and enough people will be interested that the outcome will become widely known. But there’s no generally applicable way of doing it.


Does size matter?

Over the last few months there have been several interesting pieces about innovation, the size of companies, and other rather loosely connected topics.

Back in December, the Schumpeter column in the Economist reviewed an article arguing that large firms are often more innovative than small ones. This seems counter-intuitive – surely small companies are nimble and creative, and large ones are stultified by bureaucracy. But it turns out that there are good reasons why intuition may be wrong here, although Schumpeter thought that there are limits to the argument: large firms may be innovative, but it’s usually incremental innovation rather than disruptive innovation.

This all fits in quite well with an observation a friend and I made recently when discussing discrimination in employment. You might think that there will be less diversity in large, established companies than in small companies, especially high-tech startups. But in our experience the reverse is true. We thought it was probably because large companies have better processes in place, are more conscious of the issues and are less likely to hire on the basis of existing friendships. Also, of course, they have larger workforces, so can reflect population diversity more easily than a small company in which every person constitutes, say, 10% of the employees.

Conventional wisdom also has it that most job creation happens in small companies. Well, up to a point. Recent research has found that in fact once you control for the age of the firm there is no systematic relationship between firm size and employment growth, in the USA at least. Small firms tend to be younger — if they are older, then they are less successful (else they’d be bigger). (I can’t resist: repeat after me, correlation is not causation).

The Economist has a leader this week that discusses the contrast between large and small firms. It argues that, on the whole, economies relying more on small firms have been less successful than those with more large firms (think northern and southern Europe). Apparently productivity is much greater in large firms, at least partly because of economies of scale. Don’t have special regulatory and fiscal breaks for small firms, the leader argues, but go for growth instead.



Interesting links

I found these interesting:

  1. Kaprekar’s constant — not everything has to be useful to be appealing and fun.
  2. Apparently the Roman Empire was more equal than the USA, while in Britain income inequality rose faster between 1975 and 2008 than in any other OECD member country.
  3. How to get your keys back if you drop them down a drain.
  4. Talking about big numbers
  5. The UK opens up NHS data, and the EU announces an ‘open by default’ position for public sector information.
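For anyone curious about item 1, Kaprekar’s routine fits in a few lines of Python: sort a four-digit number’s digits descending and ascending, subtract, and repeat. Every four-digit number with at least two distinct digits reaches 6174, Kaprekar’s constant (the starting value below is just an arbitrary example):

```python
def kaprekar_step(n):
    """One step of Kaprekar's routine on a 4-digit number (leading zeros kept)."""
    digits = f"{n:04d}"
    hi = int("".join(sorted(digits, reverse=True)))  # digits descending
    lo = int("".join(sorted(digits)))                # digits ascending
    return hi - lo

n, steps = 3524, 0
while n != 6174:
    n = kaprekar_step(n)
    steps += 1
print(steps)  # 3524 reaches 6174 in 3 steps: 3087, 8352, 6174
```

Note that 6174 is a fixed point of the step (7641 - 1467 = 6174), which is why the loop terminates there.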

Interesting links

There’s some cool stuff here:

  1. Are shredders still useful? From the results of a recent DARPA unshredding contest, I’d say they mostly are. Hat tip Bruce Schneier.
  2. Good arguments for transparency in the corporate world.
  3. An economist would say “what would I do if I were a horse?”
  4. One of the best advent calendars on the web.
  5. The PC is dead.

Interesting links

I’ve found these interesting, in one way or another:

  1. Is the eurozone a casino? The current betting strategy is madness.
  2. Do what I say, not what I do.
  3. xkcd’s really on a roll at the moment — one for mathematicians.
  4. One for your Christmas list. It would be so cool!
  5. You should choose your Christmas cards carefully if you’ve got any astronomer friends.