At a press conference on 8 April 2003, Admiral Hal Gehman, chairman of the Columbia Accident Investigation Board, discussed the model that was used to analyse the impact damage due to debris. If you recall, the prevalent theory at the time was that such impact damage was a major cause of the disaster.
He said “It’s a rudimentary kind of model. It’s essentially an Excel spreadsheet with numbers that go down, and it’s not really a computational model.” The implication seemed to be that computational models and Excel spreadsheets are incompatible.
However, this is not the case. The real problem with the model was not its implementation, but its basic structure. Apparently it’s a lookup table, populated with data from controlled experiments. Unfortunately the piece of debris under consideration is thought to have had a mass of about 1 kg, much larger than any of the experimental objects. The trouble with lookup tables is that they are of little use when it comes to extrapolating beyond the limits of the data.
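The failure mode is easy to demonstrate. The sketch below (all numbers invented for illustration, not taken from the actual investigation) interpolates a damage estimate from a small table of hypothetical test results: inside the data range the table gives a plausible answer, but for a mass well outside the range it can only clamp to the nearest tabulated value, silently discarding the trend.

```python
def lookup_damage(mass_kg, table):
    """Piecewise-linear interpolation over a lookup table.
    Outside the table's range it clamps to the nearest entry --
    exactly where a lookup table quietly stops being informative."""
    masses = sorted(table)
    if mass_kg <= masses[0]:
        return table[masses[0]]   # clamped at the low end
    if mass_kg >= masses[-1]:
        return table[masses[-1]]  # clamped at the high end
    for lo, hi in zip(masses, masses[1:]):
        if lo <= mass_kg <= hi:
            t = (mass_kg - lo) / (hi - lo)
            return table[lo] + t * (table[hi] - table[lo])

# Invented experimental data: damage index for small test objects.
experiments = {0.1: 2.0, 0.2: 4.5, 0.3: 7.5}

print(lookup_damage(0.25, experiments))  # within the data: 6.0, plausible
print(lookup_damage(1.0, experiments))   # far outside: 7.5, just the 0.3 kg value repeated
```

A 1 kg object simply reproduces the answer for the largest object ever tested, which is the structural weakness described above, regardless of whether the table lives in Excel or anywhere else.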
A predictive model would obviously be more computationally complex, but that does not mean it could not be implemented in Excel. If the financial services industry is anything to go by, computational complexity has never been a reason for avoiding Excel. On the other hand, implementation in Excel might well be inadvisable, because few Excel developers have the software engineering background to build a sufficiently well-tested and robust implementation.