Categories
Software

Software risks: testing might help (or not)

It’s good to test your software. That’s pretty much a given, as far as I’m concerned. If you don’t test it, you can’t tell whether it will work. It seems pretty obvious.

It also seems pretty obvious that a) you shouldn’t use test data in a live system, b) in order to test whether it’s doing the right thing, you have to know what the right thing is and c) your system should cope in reasonable ways with reasonably common situations.

If you use test data in a live system there’s a big risk that the test data will be mistaken for real data and give the wrong results to users. If you label all the test data as being different, or if it’s unlike real data in some other way, so that it can’t be confused with the real stuff, there’s a risk that the labelling will change the behaviour of the system, so the test becomes invalid. Because of this, most testing takes place before a system actually goes live. That’s all very well, unless a system’s outputs depend on the data it has processed in the past. In that case you need to make sure that the actual system that goes live isn’t contaminated in any way by test data, otherwise you could, to take an example at random, accidentally downgrade France’s credit rating.
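
To make the contamination risk concrete, here’s a minimal sketch (the table, columns and calculation are all invented for illustration, not anyone’s real system) of how an unfiltered query lets test rows leak into a live figure, and of why the flag that keeps them out is itself exactly the kind of labelling that the pre-live tests never exercise:

    # Hypothetical sketch only: the table, columns and calculation are invented.
    import sqlite3

    def average_spread(conn: sqlite3.Connection, issuer: str) -> float:
        """Live calculation that forgets test data exists.

        Any test rows left in the production table feed straight into the
        figure that users see.
        """
        (avg,) = conn.execute(
            "SELECT AVG(spread) FROM bond_quotes WHERE issuer = ?", (issuer,)
        ).fetchone()
        return avg

    def average_spread_excluding_tests(conn: sqlite3.Connection, issuer: str) -> float:
        """Same calculation, but with rows flagged is_test filtered out.

        Safer in production -- though the flag changes the queries the system
        runs, which is precisely why labelled test data isn't a perfect test.
        """
        (avg,) = conn.execute(
            "SELECT AVG(spread) FROM bond_quotes WHERE issuer = ? AND is_test = 0",
            (issuer,),
        ).fetchone()
        return avg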

There’s a possibility that if you don’t have a full specification of a system, your testing will be incomplete. Well, it’s more of a certainty, really. This becomes a particular problem if you are buying a system (or component) in. If you don’t know exactly how it’s meant to behave in all circumstances, you can’t tell whether it’s working or not. It’s not really an answer just to try it out in a wide variety of situations and assume it will behave the same way in similar situations in the future, because you don’t know precisely what differences might be significant and result in unexpected behaviours. The trouble is, the supplier may be concerned that a fully detailed specification would enable you to reverse engineer the bought-in system, and thus endanger their intellectual property rights. There’s a theory that this might actually have happened with the Chinese high speed rail network, which has had some serious accidents in the last year or so.

It can’t be that uncommon that when people go online to enter an actual meter reading, because the estimated reading is wrong, the actual reading is less than the estimated one. In fact, that’s probably why most people bother to enter their readings. So assuming that the meter has gone round the clock, through 9999 units, to get from the old reading to the new one, doesn’t seem like a good idea. The article explains the full story — you can only enter a reduced reading on the Southern Electric site within 2 weeks of the date of the original estimated reading. But the time limit isn’t made clear to users, and not getting round to something within 2 weeks is, in my experience, far from unusual. Some testing from the user point of view would surely have been useful.
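
I obviously don’t know what Southern Electric’s code actually does, but the behaviour described is consistent with something like the sketch below, where any decrease is treated as the meter having wrapped past 9999 (the function names and the four-digit meter are my assumptions, not their real system):

    # Hypothetical reconstruction of the wrap-around assumption; not Southern
    # Electric's actual code. A four-digit meter rolls over after 9999 units.
    ROLLOVER = 10_000

    def units_used_naive(previous: int, current: int) -> int:
        """Assume any decrease means the meter has gone round the clock."""
        if current < previous:
            return current + ROLLOVER - previous
        return current - previous

    def units_used_checked(previous: int, current: int) -> int:
        """Treat a small decrease as a corrected over-estimate, not a rollover."""
        if current < previous:
            wrapped = current + ROLLOVER - previous
            correction = previous - current
            if correction < wrapped:
                # e.g. estimate 5210, actual reading 5050: billing for 9840
                # units is far less plausible than a 160-unit over-estimate.
                raise ValueError("Reading below previous estimate; query it, don't bill it.")
            return wrapped
        return current - previous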


Categories
Data risk management

Fiddling the figures: Benford reveals all

Well, some of it, anyway. There’s been quite a lot of coverage on the web recently about Benford’s law and the Greek debt crisis.

As I’m sure you remember, Benford’s law says that in lists of numbers from many real life sources of data, the leading digit isn’t uniformly distributed. In fact, around 30% of leading digits are 1, while fewer than 5% are 9. The phenomenon has been known for some time, and is often used to detect possible fraud – if people are cooking the books, they don’t usually get the distributions right.

It’s been in the news because it turns out that the macroeconomic data reported by Greece shows the greatest deviation from Benford’s law among all euro states (hat tip Marginal Revolution).

There was also a suggestion that the numbers in published accounts in the financial industry deviate more from Benford’s law now than they used to. But it now appears that the analysis may be faulty.

How else can Benford’s law be used? What about testing the results of stochastic modelling, for example? If the phenomena we are trying to model are ones for which Benford’s law works, then the results of the model should comply too.
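
As a rough illustration (the chi-squared comparison and the idea of applying it to model output are my choices, not anything prescribed by Benford’s law itself), a check on a batch of numbers might look something like this:

    # Sketch of a Benford check for a batch of numbers, e.g. stochastic model output.
    import math
    from collections import Counter

    # P(leading digit = d) = log10(1 + 1/d), so 1 appears about 30% of the time.
    BENFORD = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

    def leading_digit(x: float) -> int:
        # Scientific notation puts the leading digit first: 0.00123 -> "1.230000e-03".
        return int(f"{abs(x):.6e}"[0])

    def benford_chi_squared(values) -> float:
        """Chi-squared statistic comparing observed leading digits with Benford's law."""
        digits = [leading_digit(v) for v in values if v != 0]
        if not digits:
            return 0.0
        counts = Counter(digits)
        n = len(digits)
        return sum((counts.get(d, 0) - n * p) ** 2 / (n * p) for d, p in BENFORD.items())

    # Compare the statistic against a chi-squared distribution with 8 degrees of
    # freedom; a large value suggests the numbers don't look Benford-like.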

Categories
Old site

Software causes tube problems

Widespread delays to the London Underground this week were caused by one of the Tube’s infrastructure operators installing new software.

The new software was loaded over the weekend, presumably to minimise any disruption. There’s no indication of what actually went wrong, or whether it could have been prevented by better (or more, or any) testing.

Categories
Old site

Will your website work tomorrow?

A recent survey suggests that many websites won’t work well with IE7. Normally, it wouldn’t matter much if sites didn’t work with a new browser, as take-up is typically pretty slow. However, many people will be upgraded to IE7 automatically.

I have to admit that I haven’t tested my site with IE7. I’m hoping that it will be OK, though, because I know it works with most other browsers. There are a number of sites out there that really only work with IE6, as they take advantage of its non-standard features. They are the ones that are likely not to work with IE7, which apparently has a different rendering engine.

In this case, it’s definitely a case of “do as I say, not as I do”: don’t skimp on the testing.

Categories
Old site

Don’t worry if it doesn’t work

I think this article is reporting the government as saying the following about ID cards:

  • It won’t be possible to test everything in advance
  • They’ll use off-the-shelf technology for some parts; this will have been adequately tested elsewhere
  • Trials will have to be limited in order to stay within budget
  • Instead of trials, they’ll use incremental roll-outs

So they will be testing; it’ll just be on live data (and hence real people). And just because a product is off-the-shelf doesn’t mean it’ll work under all circumstances, especially if it’s part of a larger system. Interfaces between different components are always potentially dodgy.

Anyone want to bet that this huge IT project will be delivered and working on time and within budget? Or will it be like the NHS National programme for IT?

Categories
Old site

Testing is a function

When you draw up a specification for a software application that you are developing, are you sure it’s complete?

OK, let’s start again. You should always provide a specification for a software application before you develop it. This applies to everything, even a little one-off spreadsheet. Obviously in the latter case it needn’t be particularly detailed; a single sentence is sometimes adequate. However, remember that you can’t tell if the software is doing the right thing unless you know what the right thing is.

If it’s more complicated than a single calculation, the specification should be more detailed. Typically it would cover what actions the user should be able to perform, and give details of the calculations.

Which brings me to my point. Remember that testers are users too. Unless the application is tested, you won’t be able to tell if you’ve got it right. When you’re testing, you often want to do things that normal users can’t do, such as start afresh, or input large amounts of data. This is especially likely if it’s a database application. It’s extremely frustrating to find that it’s impossible to test an application properly because it doesn’t include the functionality that you need.
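
To give a flavour of what I mean (the schema and function names here are made up; it’s the idea that matters), the test-support functionality for a small database application might be as simple as:

    # Invented schema and names; the point is that "start afresh" and
    # "load lots of data" are functions worth specifying and building.
    import random
    import sqlite3

    def reset_database(conn: sqlite3.Connection) -> None:
        """Start afresh: clear out all the transactional data."""
        conn.execute("DELETE FROM readings")
        conn.commit()

    def load_bulk_test_data(conn: sqlite3.Connection, n: int = 10_000) -> None:
        """Put in a large amount of data quickly, for volume testing."""
        rows = ((i, random.randint(0, 9999)) for i in range(n))
        conn.executemany("INSERT INTO readings (account_id, units) VALUES (?, ?)", rows)
        conn.commit()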

And yes, this is the voice of experience speaking.