

News update 2006-10: October 2006

1. Software versions
2. Bogus data
3. Is testing necessary?
4. Blogging and wikis
5. Newsletter information

1. Software versions

Want to lose 4.8 billion euros? It looks as if a good way to do it
is to make sure that different parts of your organisation are using
different versions of the same software package. The wiring
problems that have delayed (yet again) deliveries of the Airbus
A380 arose because incompatible versions of the CAD software were
being used. French and British engineers had upgraded to version 5,
while the German and Spanish engineers were still using version 4.
The two versions use different file formats.

How did this happen? It must have been a combination of factors.
First, the software manufacturer (Dassault in this case) changed
the file format without providing backwards compatibility. Then it
was decided that parts of Airbus should upgrade, and that parts
should not. Either those decisions were made independently, and
there is no overall software policy (which is a big problem) or
there were particular reasons for different parts of the
organisation to make different decisions, but nobody thought about
how they would then work together. Probably the truth is somewhere
between the two.

This isn’t, of course, a problem only with CAD software. It
shouldn’t surprise anyone that incompatibility problems can arise
with Excel too. There are still people using Excel 97, in which
macros written in later versions generally don’t work. And I’ve
come across macros written in Excel 2000 that don’t work in later
versions. In fact, to state the obvious, each new release of Excel
contains features that don’t work in earlier releases. More subtly,
some of the statistical functions were changed in Excel 2003, so
the results produced by a spreadsheet can depend on the version
under which it was last recalculated.

As Excel 2007 hits the streets (or rather desktops), incompatibility
problems are going to become more common. It’s actually going to
have a “compatibility mode” which will ensure “that content created
in the 2007 Office release can be converted or downgraded to a form
that can be used by previous versions of Office.” I like the use of
the word “downgraded” in that sentence. The trouble is, though,
that if you use the compatibility mode you won’t be able to take
advantage of all the new features.

IT departments are going to have to think carefully about their
upgrading strategy. However, even if individual organisations get
it right, there will still be problems when spreadsheets are sent
between organisations.

2. Bogus data

Computer models are all very well, but they are only as good as the
data that goes into them. Two Citibank traders recently pleaded
guilty to falsifying bank records and wire fraud. Among their
nefarious activities, they manipulated a computer model that was
monitoring options trading, by inputting bogus data. Apparently
they got a broker to supply them with false market quotes.

Maybe they had taken lessons from John Rusnak, the fraudster in the
AIB/Allfirst case. He manipulated the inputs into a spreadsheet
that monitored his trades, by making sure that exchange rate feeds
didn’t go into it directly but through his own PC.

Deliberate data manipulation like this is always going to be a risk
when people’s remuneration depends on the results. Accidental
manipulation is always a risk, though, regardless of the uses to
which the model is put. That’s why it’s really important to have a
good audit trail from the source of the data right through to the
end results of the model.

A good audit trail is one that would make any discrepancy
immediately obvious, without requiring laborious manual
comparisons. There are various ways to accomplish this, depending
on the circumstances. However, there are also numerous ways to
invalidate an audit trail, and these are probably more common.
Obvious problems include documentation that doesn’t reflect the
actual procedures, lack of documentation, manual procedures such as
copying and pasting, over-reliance on check totals, non-standard
items that get special treatment, and any stage in the process
where manual alterations are possible, whether deliberate or accidental.
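
One way to make discrepancies immediately obvious is to hash-chain the data at each stage, so that any alteration between the source and the model breaks the chain. The Python sketch below is a minimal illustration of the technique; the stage names and market data are invented:

```python
import hashlib
import json

def _h(data: bytes) -> str:
    """SHA-256 hex digest of some bytes."""
    return hashlib.sha256(data).hexdigest()

class AuditTrail:
    """Record a hash of the data at each processing stage, chained together,
    so that tampering at any stage invalidates everything downstream."""

    def __init__(self):
        self.entries = []   # list of (stage name, data hash, chain hash)

    def record(self, stage: str, data) -> None:
        data_hash = _h(json.dumps(data, sort_keys=True).encode())
        prev = self.entries[-1][2] if self.entries else ""
        chain = _h((prev + stage + data_hash).encode())
        self.entries.append((stage, data_hash, chain))

    def verify(self, stage_data: dict) -> bool:
        """Recompute the chain from the claimed data for each stage;
        returns False if any stage's data or linkage has changed."""
        prev = ""
        for stage, data_hash, chain in self.entries:
            expected = _h(json.dumps(stage_data[stage], sort_keys=True).encode())
            if expected != data_hash:
                return False
            if _h((prev + stage + expected).encode()) != chain:
                return False
            prev = chain
        return True
```

A trader who routed the feed through his own PC, Rusnak-style, would change the data between the "feed" stage and the "model input" stage, and the recomputed hashes would no longer match, with no laborious manual comparison needed.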

3. Is testing necessary?

I think you can guess what my answer would be, but it appears that
not everyone has the same opinion. It’s often felt that testing is
expensive, and can be skimped (or skipped altogether) as the
benefits it provides aren’t worth it. This view assumes that the
testing process won’t uncover any significant problems. Experience
shows that this assumption is usually over-optimistic.

It’s important to remember, too, that testing isn’t just about
getting the calculations right (though that’s important). You may
also want to test performance (under both normal and abnormal
conditions) and usability. Doing the right thing covers every
aspect of a deployed system, and it’s important to get it all right.

A recent article reports the government as saying the following
about ID cards:

* It won’t be possible to test everything in advance
* They’ll use off-the-shelf technology for some parts; this will
have been adequately tested elsewhere
* Trials will have to be limited in order to stay within budget
* Instead of trials, they’ll use incremental roll-outs

So they will be testing, it’ll just be on live data (and hence real
people). And just because a product is off-the-shelf it doesn’t
mean it’ll work under all circumstances, especially if it’s part of
a larger system. Interfaces between different components are always
potentially dodgy.

I can just see this whole ID card project heading in the same
direction as the NHS National programme for IT, which has become a
byword for disastrous IT projects. Not testing it properly is just
asking for trouble.

A recent survey suggests that many applications fail when they are
deployed. If you don’t plan for performance issues in advance,
during the development process, things can go pear-shaped in the
production environment. Performance can be significantly affected
by network issues, for example: often, development takes place on a
LAN, but the production environment is a WAN. If you don’t test in
an environment as much like the production environment as possible,
you’re just not going to find the problems.
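
Even without a full WAN test environment, you can go some way by injecting artificial latency into your tests. The Python sketch below wraps a call in a random delay and checks it against a latency budget; the delay range and the budget are assumptions for illustration, not measurements of any real network:

```python
import random
import time

def with_simulated_wan(fn, min_delay=0.05, max_delay=0.20):
    """Wrap fn so each call pays a random WAN-like round-trip delay.
    The delay range here is illustrative; measure your real network first."""
    def wrapped(*args, **kwargs):
        time.sleep(random.uniform(min_delay, max_delay))
        return fn(*args, **kwargs)
    return wrapped

def fetch_quote(symbol):
    """Stand-in for a real data-service call (hypothetical)."""
    return {"symbol": symbol, "price": 100.0}

def test_quote_latency_budget():
    slow_fetch = with_simulated_wan(fetch_quote)
    start = time.perf_counter()
    quote = slow_fetch("ABC")
    elapsed = time.perf_counter() - start
    assert quote["symbol"] == "ABC"
    assert elapsed < 1.0, "call exceeded the 1-second latency budget"
```

A test like this run only on a LAN, with the wrapper removed, would sail through and tell you nothing about how the application behaves when every round trip costs real time.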

The report says “perhaps the most telling statistic from the survey
is that most IT departments (71 per cent) seem to rely on end users
calling the help desks to alert them that performance problems
exist. This means problems are only reported after their impact is felt.”

In other words, use your users as testers. They won’t mind, will they?

When it comes down to it, testing is useful. If you don’t look for
problems, you mightn’t find them until they are really serious.

4. Blogging and wikis

I’ve started a new blog at It’s
likely that many, but not all, of the items in this newsletter will
be mentioned in the blog first. There will also be things that appear in
the blog that don’t make it into the newsletter.

One of the GIRO working parties this year is on “Building an open
source ICA model”. If you’d like to join the working party, please
let me know. If you’re interested in what we’re doing, take a look
at our wiki at http:// You’ll need the
password (or a pbwiki ID) to edit it; again, let me know if you’d
like to contribute at that level, without committing to the working party.

5. Newsletter information

This is a monthly newsletter on risk management in financial services,
operational risk and user-developed software from Louise Pryor
( Copyright (c) Louise Pryor 2006. All
rights reserved. You may distribute it in whole or in part as long as
this notice is included.

To subscribe, email news-subscribe AT To unsubscribe,
email news-unsubscribe AT Send all comments, feedback
and other queries to news-admin AT (Change ” AT ” to
“@”). All comments will be considered as publishable unless you state
otherwise. The newsletter is archived at