
Risk maps

The purpose of a risk map is to help you decide what to do about your risks. I’ve seen the term applied to several different things; this note sets out my understanding of the topic.

The important properties of a risk map are:

  • Includes all the relevant risks;
  • Includes some sort of ranking or assessment of each risk;
  • Each risk is mapped back to the organisational structure in some useful way.

Another term often used in this context is risk profile. If the properties listed above seem a bit vague, that’s because risk maps can be used in all sorts of different situations, and being any more specific would rule some of them out.

We can consider a couple of simple examples to make things a bit clearer.

FSA risk assessment matrix

The risk assessment matrix produced by the FSA as part of its ARROW risk assessment framework is a risk map. (The matrix is available in both The firm risk assessment framework and Building the new regulator: Progress report 2 — see below.)

It includes all the risks that the FSA is interested in; each is given a probability score, and each is mapped on to the risks to the FSA’s objectives. The completed matrix is used by the FSA to decide on any remedial actions that should be taken by the firm being assessed.

Internal risk management

An organisation’s internal risk management processes might also make use of a risk map.

The risks included in the map would be decided during the identification stage; it’s important to make sure that all the risks that the organisation faces are included.

A simple method of assessment is to assign to each risk a qualitative value for impact or consequences and one for frequency or probability. In each case, a simple low/medium/high classification is often used.

A simple matrix can then be used to assign a single grade to each risk: for example, a high impact/high frequency risk might be ranked as avoid while a low/low risk might be ranked as ignore. Other possibilities include insure, control, and transfer.
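A grading matrix like the one described above is easy to sketch in code. The particular impact/frequency classifications and their mapping to actions below are illustrative assumptions, not a standard scheme:

```python
# Hypothetical sketch of a simple risk grading matrix: impact and
# frequency are each rated low/medium/high, and each pair maps to a
# suggested treatment. The assignments here are purely illustrative.

ACTIONS = {
    ("high", "high"): "avoid",
    ("high", "medium"): "control",
    ("high", "low"): "insure",
    ("medium", "high"): "control",
    ("medium", "medium"): "control",
    ("medium", "low"): "transfer",
    ("low", "high"): "control",
    ("low", "medium"): "ignore",
    ("low", "low"): "ignore",
}

def grade(impact: str, frequency: str) -> str:
    """Return the suggested treatment for a risk."""
    return ACTIONS[(impact, frequency)]

print(grade("high", "high"))   # avoid
print(grade("low", "low"))     # ignore
```

In practice the interesting work is in agreeing the classifications and the mapping, not in applying them; the table simply makes the agreed policy explicit and repeatable.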

A structure that is frequently found is for risks to be grouped by functional area. Relating them to the organisational structure in this way helps to decide how to control them.


Newsletter May 2003

News update 2003-05: May 2003
===================

A monthly newsletter on risk management in financial services,
operational risk and user-developed software from Louise Pryor
(http://www.louisepryor.com).

Comments and feedback to news-admin@louisepryor.com. Please tell me if
you don’t want to be quoted.

Subscribe by sending an email to news-subscribe@louisepryor.com.
Unsubscribe by sending an email to news-unsubscribe@louisepryor.com.
Newsletter archived at http://www.louisepryor.com/newsArchive.do.

In this issue:
1. Human error
2. Non-human error?
3. FSA update
4. Correspondence
5. Newsletter information

===============
1. Human error

CompTIA, the Computing Technology Industry Association, recently
released a white paper on computer security. They surveyed 638
professionals in North America. The principal results were:

– 31% had between one and three major security breaches in the last
six months.
– Human error was the primary cause of 34% of the most recent
breaches; intentional action accounted for 29%, technical
malfunction for 8%, and a combination of human error and
technical malfunction for 29%. Human error was thus implicated
in 63% of breaches.
– 57% of organisations had no comprehensive written IT security
policy in place.
– About 20% of organisations had no IT staff with security-related
training.

The survey apparently included respondents from the educational,
governmental, financial, and IT sectors among others. There seems
no real reason to think that the situation is substantially
different in the UK.

Human error is clearly a big problem; the figures above indicate
that in a six month period it may cause at least one major security
breach in 20% of organisations. The report doesn’t analyse the
causes in any more detail, but two possible reasons for human error
are poor user interfaces and documentation combined with complex
systems, and lack of training. Human error can occur among users as
well as among IT professionals; passwords are an obvious problem
area, as is software installation.
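The figures above follow directly from the survey percentages; this sketch just reproduces the arithmetic:

```python
# Reproducing the arithmetic behind the survey figures quoted above.
primary_human_error = 0.34     # human error the primary cause
combined_error = 0.29          # human error plus technical malfunction

# Share of most recent breaches in which human error was implicated.
implicated = primary_human_error + combined_error
print(f"{implicated:.0%}")     # 63%

# Rough share of organisations suffering at least one major breach in
# six months that is attributable to human error: 31% had one to three
# breaches, and human error was implicated in 63% of breaches.
had_breach = 0.31
print(f"{had_breach * implicated:.0%}")   # 20%
```

The second figure is only a back-of-the-envelope estimate, since it treats the breach rate and the cause mix as independent, but it matches the roughly 20% quoted above.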

The report also shows that many organisations don’t take computer
security seriously. The “it won’t happen to us” mentality is
clearly alive and kicking. Human error is much more likely where
there is no comprehensive IT security policy in place; people may
simply not know that their behaviour is dangerous.

As in so many areas, ignorance is the big problem: people either
don’t know what they should (or shouldn’t) be doing, or they know
the correct procedures and ignore them. Education and training are
necessary, but so is a commitment to good practice that runs
throughout the organisation.

Committing to Security: A CompTIA Analysis of IT Security and the
Workforce is available at
http://www.comptia.org/research/whitepapers.asp?topic=Security

===============
2. Non-human error?

You might think that one way to eliminate human error is to
automate the process, whatever it might be. However, automation can
lead to different problems, and anyway people are nearly always
involved at some stage.

A woman in Seattle is suing Farmers Insurance Company of
Washington, claiming that the company uses an expert system to
provide inaccurately low estimates for automobile-related injury
claims. She argues that she would have taken her business
elsewhere if she had known the company would use a computer to
judge how much her life was worth. “They didn’t even come out to
see me,” Barbara Martin said. “How could they know me or what
happened to me?”

The expert system in question is Colossus, a product of Computer
Sciences Corporation (CSC). It is fairly clear from CSC’s web site
that Colossus, which they describe as “the industry’s leading
expert system for evaluating bodily injury claims,” is used
interactively by a claims adjuster. Indeed, it’s difficult to see
how it could be otherwise. The results produced by Colossus thus
depend on what information is available to the claims adjuster and
how that information is interpreted. As usual, “Garbage In, Garbage
Out” applies.

The complaints about Colossus aren’t new. Two former employees of
Farmers were sued for saying that Colossus places unfairly low
values on personal-injury claims. They claimed that Farmers
adjusted Colossus so that its estimates were consistently below
those of experienced claims adjusters.

I know no more about this case and Colossus than I have read in the
papers and on CSC’s web site (see references at the end of this
section), but it seems to me that several issues are being
conflated here.

– Mrs Martin says that she was never visited by a claims adjuster,
and that Farmers therefore couldn’t have had accurate information
about her injuries and their effects. This issue is independent
of whether Colossus was used or not.

– People don’t trust computers. They don’t like the idea of
decisions that affect them significantly being made by a
collection of silicon chips. On the whole, people doubt whether
computers can understand the subtleties of their particular
circumstances: we see this to a lesser extent with credit rating
systems, for example.

– Although CSC claim that Colossus should be used as a guide rather
than as an infallible source of estimates, it is very possible
that in practice it is rare for claims adjusters to disagree with
the numbers it produces. This effect might be due to corporate
culture or pressure on people to conform, but may also be because,
somewhat inconsistently, people do seem to trust computer systems
that they actually use themselves.

– There is a belief that this particular expert system has been
made to produce artificially low claims estimates. Of course it’s
entirely possible that its estimates are below those of human
claims adjusters. Possibly Farmers had thought that its claims
were getting out of hand. However, a reduction could have been
accomplished without Colossus by simply instructing the adjusters
to reduce their estimates. The use of Colossus would however help
to enforce a general lowering of estimates.

On the whole it looks as if the use of Colossus is a peg on which
various concerns are being hung. Clearly its use is perceived as
threatening by both customers and employees. Possibly management
are using it as a shield, rather than taking responsibility for
events themselves.

There is some good scaremongering going on in the press: “About
half of insurance companies that operate in the United States,
including some of the largest, such as Aetna, Hartford Financial
Services and Zurich Personal Injury, use the Colossus program. But
those companies have done a good job suppressing information about
the program. Colossus is not mentioned in insurance policies or
advertising brochures. Neither is the fact that claims are being
adjusted by computer.”

A description of Colossus can be found at
http://www.csc-fs.com/MARKETS/detail/cc_COLOSSUS.asp
There’s coverage of the story at
http://seattlepi.nwsource.com/local/122105_colossus15xx.html
and http://seattlepi.nwsource.com/local/93620_insurance31.shtml

===============
3. FSA update

Looking at the lists of new consultations and feedback out within
the last month I was struck by the imbalance: 8 new CPs, and
feedback to only 4 CPs. Are things getting out of hand? When I
investigated further, however, I was less worried. A number of CPs
never get explicit feedback documents of their own, but are dealt
with in the monthly Handbook Notices. These are mostly “small” CPs,
such as the series on Miscellaneous amendments to the Handbook,
which often have very few responses or even none at all.
Unfortunately the relevant Handbook Notices aren’t referenced on
the main CP web pages, which continue to say that “Response Paper
will be available at the end of the consultation process”. I have
also found at least one CP that does have feedback, including a
policy statement, that is not referenced from the main CP page
(CP138).

So if you are particularly eager to find out if there is feedback
on a CP, your best bet is to use the FSA search facility, which
works very well, to search for all references to it (eg, entering
CP138 as the search term will show you the relevant feedback).

New consultation and discussion papers out this month:
—————————————————–

CP179 The Authorisation manual – Draft perimeter guidance on
activities related to pension schemes
CP180 Fees for mortgage firms and insurance intermediaries
CP181 The Interim Prudential Sourcebooks for Insurers and Friendly
Societies: Implementation of the Solvency I Directives
(2002/12/EC and 2002/13/EC)
CP182 Proposed changes to the Listing Rules to take account of the
introduction of treasury shares
CP183 Standardising past performance – Including feedback on CP132
CP184 Miscellaneous amendments to the Handbook (No.8)
CP185 The CIS sourcebook – A new approach
CP186 Mortgage regulation: Draft conduct of business rules and
feedback on CP146

Feedback published this month:
—————————–

CP132 The presentation of past performance and bond fund yields in
financial promotions
CP146 The FSA’s approach to regulating mortgage sales
CP149 Market abuse: Pre-hedging convertible and exchangeable bond
issues
CP168 Fees 2003/04

DP17 Short selling

Current consultations, with dates by which responses should be
received by the FSA, are listed at
http://www.fsa.gov.uk/pubs/2_consultations.html

===============
4. Correspondence

Contributions are anonymous this month as I hadn’t warned people
that they might be quoted. In future, I’ll use your name unless you
say otherwise.

Last month I chatted about the problems of public holidays in
Scotland, where they are both different from those in England and
vary between cities. A reader wrote:

On the subject of Bank Holidays, I was surprised shortly after I
moved to India to find that unlike England, where I think Bank
Holidays are called that because even bank staff get them, here
Bank Holidays are called that for the opposite reason – only
bank staff get them.

I also talked about how difficult it is to ensure that software has
no errors, unless it has been so rigorously specified that it can
be proved to be correct. I received the comment:

and in that case, the specification will contain bugs 🙂

or the proof will…

===============
5. Newsletter information

This newsletter is issued approximately monthly by Louise Pryor
(http://www.louisepryor.com). Copyright (c) Louise Pryor 2003. You
may distribute it in whole or in part as long as this notice is
included. To subscribe, email news-subscribe@louisepryor.com. To
unsubscribe, email news-unsubscribe@louisepryor.com. All comments,
feedback and other queries to news-admin@louisepryor.com. Archives
at http://www.louisepryor.com/newsArchive.do.


Newsletter Apr 2003

News update 2003-04: April 2003
===================

A monthly newsletter on risk management in financial services,
operational risk and user-developed software from Louise Pryor
(http://www.louisepryor.com).

Comments and feedback to news-admin@louisepryor.com. Subscribe by
sending an email to news-subscribe@louisepryor.com. Unsubscribe by
sending an email to news-unsubscribe@louisepryor.com. Newsletter
archived at http://www.louisepryor.com/newsArchive.do.

In this issue:
1. Troubles never come singly
2. Troubles come in threes (or more)
3. It’s the model that matters
4. FSA update
5. Public Holidays
6. Newsletter information

===============
1. Troubles never come singly

On 9th April the public power supply at Demon Internet’s Network
Operations Centre at Finchley failed. The standby generator started
up as expected; but a fault then occurred in the power control
system so that it couldn’t be used to run the equipment. At this
stage the backup batteries were the only source of power.
Unfortunately they ran out before the power control system could be
put back in action. This wasn’t really surprising, as they are only
intended for use while the generator is being started up. Several
services were affected.

The public power supply was eventually restored some hours later,
but meanwhile there had been a build-up of email. Quotas had been
imposed on customers during the outage, and some messages were
returned to sender as quotas were exceeded. There were also some
messages that were corrupted during the outage, and could not be
delivered at all.

The problem is that you can’t rely on just one thing going wrong at
a time. And even if Demon had had another line of defence (after
the batteries), there is no guarantee that it wouldn’t have gone
wrong too. However much you try to control risk by taking
preventative action, you just can’t be sure that you’ve done enough
– and it may not be cost effective, anyway.

Demon press releases can be found at
http://www.demon.net/helpdesk/announce/2003/da2003-04-10a.shtml
http://www.demon.net/helpdesk/announce/2003/da2003-04-15a.shtml

===============
2. Troubles come in threes (or more)

During March this year one of Danske Bank’s two main operating
centres was out of action for a week. During this period the bank’s
trading desks, currency exchange and communications with other
banks were shut down. Reports say that the episode had some effects
on the Danish economy; that the Nationalbanken was forced to inject
5 billion kroner into the banking sector to help push transactions
through; and that direct and indirect losses to Danske Bank could
amount to 50 million kroner (about USD 7.2 million).

It all started during the routine replacement of a defective
electrical unit in an IBM disk system. There was an electrical
outage in the disk system, which caused operations at the operating
centre to come to a halt. A few hours later, the disk system was
operational again and the overnight batch runs were started. It
soon became evident that they were not running correctly.

Apparently there was a software bug in the DB2 database system that
Danske Bank uses, and although the database system had restarted
normally after the breakdown there were inconsistencies in the
data. This bug had been present in all similar DB2 systems
installed since 1997, but this was the first time that the right
(or wrong) combination of circumstances had occurred to trigger the
problem.

Worse was to come. During the data recovery process, which in the
end took four days, three more hitherto unknown bugs were
discovered in DB2. The final one (and, reading between the lines of
the Danske Bank report, the final straw) was a problem that
“resulted in new episodes of inconsistent data that had to be
recreated by other methods. This made the process longer and more
complicated.” They eventually used backup data from their other
main operating centre, rather than wait for the software patch from
IBM.

Things could have been worse. Because Danske Bank had two operating
centres, some of their services were completely unaffected.
Moreover, it looks as if their backup (and restoration) procedures
worked when they needed to.

In 1789 Benjamin Franklin wrote “In this world nothing can be said
to be certain, except death and taxes.” Nowadays we should add
software bugs to the list. Until software has been tested under
every possible combination of circumstances, or unless it has been
so rigorously specified that it can be proved to be correct, it is
likely to contain bugs, and those bugs may cause significant
problems.

There’s a brief description of what happened at
http://www.theregister.co.uk/content/53/30095.html
Danske Bank’s report on the incident can be found at
http://frequyff.notlong.com

===============
3. It’s the model that matters

At a press conference on 8th April, Admiral Hal Gehman, Chairman of
the Columbia Accident Investigation Board, discussed the model that
was used to analyse the impact damage due to debris. If you recall,
the prevalent theory is that this was a major cause of the
disaster.

He said “It’s a rudimentary kind of model. It’s essentially an
Excel spreadsheet with numbers that go down, and it’s not really
not a computational model.” The implication seems to be that
computational models and Excel spreadsheets are incompatible.

However, this is not the case. The real problem with the model was
not its implementation, but its basic structure. Apparently it’s a
lookup table, populated with data from controlled experiments.
Unfortunately the piece of debris under consideration is thought to
have had a mass of about 1kg, much larger than any of the
experimental objects. The trouble with lookup tables is that they
are not much good when it comes to extrapolation beyond the limits
of the data.
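The extrapolation problem can be illustrated with a toy example (the numbers below are made up; this is not the actual crater model or its data). A lookup table with interpolation simply has no supported answer outside its measured range:

```python
# A minimal illustration of why a lookup table cannot extrapolate:
# values outside the experimental range have no supported answer.
# The mass/damage figures are invented for illustration only.
import bisect

masses = [0.05, 0.10, 0.20, 0.40]   # hypothetical debris masses (kg)
damage = [1.0, 2.1, 4.5, 9.8]       # hypothetical damage scores

def lookup_damage(mass: float) -> float:
    """Linear interpolation, valid only within the measured range."""
    if not masses[0] <= mass <= masses[-1]:
        raise ValueError(f"mass {mass} kg is outside the measured range")
    i = bisect.bisect_left(masses, mass)
    if masses[i] == mass:
        return damage[i]
    m0, m1 = masses[i - 1], masses[i]
    d0, d1 = damage[i - 1], damage[i]
    return d0 + (d1 - d0) * (mass - m0) / (m1 - m0)

print(lookup_damage(0.15))   # interpolation works inside the range
# lookup_damage(1.0) raises: a 1 kg object is far beyond the data,
# and silently clamping or extending the table would just be a guess.
```

A predictive model built from physical principles could at least attempt an answer at 1 kg; the table can only refuse or mislead.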

A predictive model would obviously be more computationally complex,
but that does not mean that it would not be possible to implement
it in Excel. If the financial services industry is anything to go
by, computational complexity has never been a reason for avoiding
Excel. On the other hand, implementation in Excel might well be
inadvisable, because there are few Excel developers who have the
software engineering background to build a sufficiently well tested
and robust implementation.

The transcript of the press conference is at
http://www.caib.us/news/press_briefings/pb030408.html

===============
4. FSA update

Callum McCarthy has been appointed as the new Chairman of the FSA,
taking over from Howard Davies on 22nd September. Unlike Davies,
McCarthy will not combine the position with that of Chief
Executive. The plan is to appoint a new Chief Executive before
September.

According to my count, when McCarthy joins the Board of the FSA
there will be thirteen external members, seven of whom have
been in the banking industry at some time during their careers.
There are no external members from the insurance industry, and only
one from investment management.

New consultation and discussion papers out this month:
—————————————————–

CP176 Bundled Brokerage and Soft Commission Arrangements
CP177 Lloyd’s policyholders: Review of compensation arrangements
CP178 Review of prudential regulation of the Lloyd’s market

Feedback published this month:
—————————–

CP148 The FSA’s approach to the use of its powers under
The Unfair Terms in Consumer Contracts Regulations 1999

DP16 Hedge funds and the FSA

Current consultations, with dates by which responses should be
received by the FSA, are listed at
http://www.fsa.gov.uk/pubs/2_consultations.html

===============
5. Public Holidays

The annual confusion has started again. Those of you who live in
England (or many other countries) probably expect to believe your
diaries when the words “Bank Holiday” appear. Those of us in
Scotland know better.

Many bank holidays are public holidays only in England and
Wales. There are no equivalents in Scotland: the public
holidays depend on the city. In Edinburgh, for example, we
have public holidays this year on 1st and 2nd January, 14th April
(Edinburgh Spring Holiday), 5th May (May Day), 19th May (Victoria
Day), 15th September (Edinburgh Autumn Holiday), Christmas Day
and Boxing Day. It wasn’t entirely clear to me whether Good Friday
and Easter Monday were holidays or not.

However, banks tend to stick to the English bank holidays. Some
other businesses do that too. Others use the Edinburgh
holidays. Some give their employees a choice: take any 8 days as
long as they are either English or Edinburgh holidays. The problem
for many businesses, especially in the financial services sector,
is that customers from outside Scotland expect them to be around
when their English counterparts are.

===============
6. Newsletter information

This newsletter is issued approximately monthly by Louise Pryor
(http://www.louisepryor.com). Copyright (c) Louise Pryor 2003. You
may distribute it in whole or in part as long as this notice is
included. To subscribe, email news-subscribe@louisepryor.com. To
unsubscribe, email news-unsubscribe@louisepryor.com. All comments,
feedback and other queries to news-admin@louisepryor.com. Archives
at http://www.louisepryor.com/newsArchive.do.


Columbia space shuttle

At a press conference on 8th April 2003, Admiral Hal Gehman, Chairman of the Columbia Accident Investigation Board, discussed the model that was used to analyse the impact damage due to debris. If you recall, the prevalent theory at the time was that this was a major cause of the disaster.

He said “It’s a rudimentary kind of model. It’s essentially an Excel spreadsheet with numbers that go down, and it’s not really not a computational model.” The implication seemed to be that computational models and Excel spreadsheets are incompatible.

However, this is not the case. The real problem with the model was not its implementation, but its basic structure. Apparently it’s a lookup table, populated with data from controlled experiments. Unfortunately the piece of debris under consideration is thought to have had a mass of about 1kg, much larger than any of the experimental objects. The trouble with lookup tables is that they are not much good when it comes to extrapolation beyond the limits of the data.

A predictive model would obviously be more computationally complex, but that does not mean that it would not be possible to implement it in Excel. If the financial services industry is anything to go by, computational complexity has never been a reason for avoiding Excel. On the other hand, implementation in Excel might well be inadvisable, because there are few Excel developers who have the software engineering background to build a sufficiently well tested and robust implementation.



What is a bug?

In computing parlance, unlike normal life, bugs and viruses have nothing to do with each other. A bug is simply a fault, or error, while a virus is a malicious program that propagates from computer to computer by hiding itself inside another program or document.

Legend has it that the term bug was invented by Grace Murray Hopper, a Rear Admiral in the US Navy, who was one of the pioneers of computing. Early computers were huge machines made of relays and valves and wires and so on; compared to today’s sleek laptops or PDAs they were veritable Heath Robinson contraptions. Anyway, they were open to the atmosphere. Hopper tells the story:

Things were going badly; there was something wrong in one of the circuits of the long glass-enclosed computer. Finally, someone located the trouble spot and, using ordinary tweezers, removed the problem, a two-inch moth. From then on, when anything went wrong with a computer, we said it had bugs in it.

Hopper’s team introduced a new term into computing jargon when they said that they had debugged the machine. However, contrary to popular legend, the term bug had been in use since 1878, or even earlier, when Edison used it to refer to a flaw in a system.



Newsletter Mar 2003

News update 2003-03: March 2003
===================

A monthly newsletter on risk management in financial services,
operational risk and user-developed software from Louise Pryor
(http://www.louisepryor.com). Comments and feedback to
news-admin@louisepryor.com. Subscribe by sending an email to
news-subscribe@louisepryor.com. Unsubscribe by sending an email to
news-unsubscribe@louisepryor.com. Newsletter archived at
http://www.louisepryor.com/newsArchive.do.

In this issue:
1. Modelling problem
2. Spreadsheets: why test?
3. FSA update
4. Software versions
5. Newsletter information

===============
1. Modelling problem

On 6th March 2003 Provident Financial Group of Cincinnati announced
a restatement of its results for the five financial years from 1997
to 2002. Between 1997 and 1999 Provident created nine pools of car
leases. Part of the financial restatement was because the leases
were treated off balance sheet, rather than on balance sheet as was
later thought to be appropriate. But there was also a significant
restatement of earnings, because there was a mistake in the model
that calculated the debt amortisation for the leases. It appears
that the analysts who built the model used for the first pool “put
in the wrong value, and they didn’t accrue enough interest expense
over the deal term. The first model that was put together had the
problem, and that got carried through the other eight,” according
to the Chief Financial Officer, who also went on to say that he did
not think other banks had made similar errors. “We made such a
unique mistake here that I think it’s unlikely.”

It appears that the error was found when Provident introduced a new
financial model that was tested against the original, and that the
two models produced different results. They then went back and
looked at the original model to see which one was correct. We don’t
know that these were spreadsheet models, but it’s entirely
possible. And the lack of testing may have led to earned income
being overstated by $70 million over five years. Provident also
faces a class action suit from investors.

If I am right, and the erroneous model was a spreadsheet (and from
the fact that those who built it were referred to as “analysts”
than “programmers” or “developers” some sort of user-developed
software seems likely), this is a classic example of a spreadsheet
being built as a one-off and then reused without adequate
controls. Later pools must have used a different spreadsheet, as
they were not subject to the same restatement.

The CFO has more confidence than I do in the ability of other banks
to avoid similar errors.

See http://cappigun.notlong.com for the press release from
Provident, and http://www.cincypost.com/2003/03/12/prov031203.html
and http://nitcrish.notlong.com for press coverage from the
Cincinnati Post and New York Times.

===============
2. Spreadsheets: why test?

You should test your spreadsheets because they may contain errors
(see item 1). People fail to test their spreadsheets because they
underestimate the benefits of doing so: mainly, they simply don’t
know how many spreadsheets contain errors. Think about it for a
moment. Do 10% of spreadsheets contain errors? Or 20% (for the
pessimists among you)? These rates are high, and should be enough
to make alarm bells ring, but the actual rates are probably far
higher.

A few years ago Professor Ray Panko, at the University of Hawaii,
pulled together the available evidence from field audits of
spreadsheets. Of the 54 spreadsheets that were audited, 49 had
errors. That’s an error rate of 91%.

Other studies show that the error rate per cell is between 0.38%
and 21%. These results are difficult to interpret: are they
percentages of all cells, cells containing formulae, or unique
formulae? (If a formula is copied down a row or column, it may
count as many formula cells, but is only one unique formula). If we
assume a rate of 1% of unique formulae having errors, and look at
spreadsheets containing from 150 to 350 unique formulae (this is a
fairly typical size in my experience), we find that the probability
of an individual spreadsheet containing an error is between 78% and
97%.
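The arithmetic behind these figures is straightforward: if each unique formula independently has a 1% chance of containing an error, the chance that a spreadsheet escapes error altogether shrinks geometrically with its size. A quick check:

```python
# Probability that a spreadsheet with n unique formulae contains at
# least one error, if each formula independently has a 1% error rate.
def p_error(n_formulae: int, rate: float = 0.01) -> float:
    return 1 - (1 - rate) ** n_formulae

print(f"{p_error(150):.0%}")   # 78%
print(f"{p_error(350):.0%}")   # 97%
```

The independence assumption is rough (a copied formula repeats its error everywhere), which is exactly why the calculation is phrased in terms of unique formulae rather than cells.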

To make matters worse, people tend to overestimate their own
capabilities. Panko describes an experiment in which people were
asked to say whether they thought spreadsheets that they had
developed contained errors. On the basis of their responses, about
18% of the spreadsheets would have been wrong; the true figure was
81%. The actuary who told me “As far as I am concerned, none of my
spreadsheets has ever had a bug in it” was probably deluding
himself.

There’s a great deal of misplaced confidence in the accuracy of
spreadsheets. Another actuary who recently said “Of course, in a 1%
world we can’t afford to test our spreadsheets properly” must have
missed a word out. He should have said that they couldn’t afford
*not* to test their spreadsheets.

Panko’s web page at http://panko.cba.hawaii.edu/ssr/ has loads of
interesting information about spreadsheets, including the error
rates described above. There is further discussion at
http://www.louisepryor.com/showTopic.do?code=errorRates.

===============
3. FSA update

Well, we’re certainly living in interesting times and they are
every bit as interesting in the financial world as elsewhere. The
FSA must be hoping for a bit of boredom, just like the rest of us.

On 26th February Howard Davies gave a speech on “Managing Financial
Crises”. In what might seem a surprising view, he took the line
that we are not in one at the moment. However, his definition of a
crisis is quite specific, and, luckily, does not apply to the
current situation. The speech is at
http://www.fsa.gov.uk/pubs/speeches/sp115.html.

As the markets remain turbulent, but mainly in a downward
direction, we’ve had more details on the flexibility provided by
waivers of the rules for with profits life insurance, in a letter
to CEOs of life companies
(http://www.fsa.gov.uk/pubs/other/ceo_letter_wp.pdf). The letter
contains some general guidance, which was issued without the normal
consultation period because there was a worry that any delay would
not be in the interests of consumers. As every financial
commentator in the land has explained, often more than once, the
problem is that adhering to the letter of the solvency requirements
may force life companies to sell equities into a falling market,
thus both reinforcing the downward direction of the market and,
possibly, going against the most appropriate investment strategy
for the life office. Is this another instance of targets distorting
the quality they are trying to measure?

The feedback on CP142, Operational risk systems and controls is
just out. There are no significant changes as a result of the
feedback.

New consultation and discussion papers out this month:
—————————————————–

CP172 Electronic money: Perimeter guidance
CP173 Amendments to the Interim Prudential sourcebook for
Investment Businesses chapter 5 rules on consolidated
supervision
CP174 Prudential and other requirements for mortgage firms and
insurance intermediaries
CP175 Miscellaneous amendments to the Handbook (No. 7)

DP21 Implementation of the Distance Marketing Directive

Feedback published this month:
—————————–

CP142 Operational risk systems and controls

Current consultations, with dates by which responses should be
received by the FSA, are listed at
http://www.fsa.gov.uk/pubs/2_consultations.html

===============
4. Software versions

I only narrowly averted disaster last week. I was giving a talk,
and had been told that “Powerpoint is standard, from diskette.” So
that was OK. But wait! What version of PowerPoint? It turned out to
be 97, and I had prepared my talk in 2002. Most of the animations
didn’t work, to such an extent that some vital information simply
didn’t appear. Luckily, I discovered this in the comfort of my own
home, using a PowerPoint 97 viewer, instead of in front of an eager
audience.

This is actually a big problem with Microsoft Office products. One
of the reasons that they are so widely used is that they are widely
used, and seen as the standard. Unfortunately, although they are
backwards compatible they are not forwards compatible (not
surprising, really). When you buy a new copy you have to buy the
latest version (if you are an individual; you have more choice if
you are a volume licensee); and it’s quite likely that people you
are trying to be compatible with have an older version.

I don’t have any hard evidence, but it seems to me that Office 97
is very widely used, in spite of the fact that there are two more
recent releases (2000, and 2002 aka XP). This means that there is
no effective standard: people with recent versions may use features
that aren’t available in 97.

Maybe the feature bloat of which Microsoft is often accused has
even worse effects than we thought. Office 97 clearly has enough
features for many people, and the extra features in later versions
are worse than useless: they are positively harmful if you want
compatibility.

===============
5. Newsletter information

This newsletter is issued approximately monthly by Louise Pryor
(http://www.louisepryor.com). Copyright (c) Louise Pryor 2003. You
may distribute it in whole or in part as long as this notice is
included. To subscribe, email news-subscribe@louisepryor.com. To
unsubscribe, email news-unsubscribe@louisepryor.com. All comments,
feedback and other queries to news-admin@louisepryor.com. Archives
at http://www.louisepryor.com/newsArchive.do.


Spreadsheet error rates

Think about it for a moment. Do 10% of spreadsheets contain errors? Or 20% (for the pessimists among you)? These rates are high, and should be enough to make alarm bells ring, but the actual rates are probably far higher.

A few years ago Professor Ray Panko, at the University of Hawaii, pulled together the available evidence from field audits of spreadsheets. These are the results he shows:

Study                     Spreadsheets   With errors   % with errors
Coopers & Lybrand, 1997        23             21            91%
KPMG, 1997                     22             20            91%
Lukasic, 1998                   2              2           100%
Butler (HMCE), 2000             7              6            86%
Total                          54             49            91%
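The aggregate figure on the last row can be checked with a short sketch. The study data are the numbers reported in the table; the code itself is only illustrative:

```python
# Field audit results surveyed by Panko:
# (study, spreadsheets audited, number with errors)
studies = [
    ("Coopers & Lybrand, 1997", 23, 21),
    ("KPMG, 1997", 22, 20),
    ("Lukasic, 1998", 2, 2),
    ("Butler (HMCE), 2000", 7, 6),
]

audited = sum(n for _, n, _ in studies)
with_errors = sum(e for _, _, e in studies)

# 49 of 54 audited spreadsheets had errors: roughly 91%
print(f"{with_errors}/{audited} = {100 * with_errors / audited:.0f}%")
```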

More recently Lawrence and Lee analysed 30 project financing spreadsheets. All 30 had errors; the error rate was 100%.

It’s difficult to know how to interpret these results. They are certainly very high numbers, and send a chill down my spine. However, in terms of all the spreadsheets out there in the world, these error rates may be:

Understated:
  • because not all the errors were caught in the audit. Spreadsheet reviewers and auditors are subject to human error like the rest of us, and depending on how long they spent on the audit may well have missed some of the errors.
  • because the sample of spreadsheets chosen for audit was biased. Possibly only those that were considered to be most important, and over which the greatest care had been taken, were selected.
Overstated:
  • because the sample of spreadsheets chosen for audit was biased. Possibly only those that were considered to be most likely to have errors in were selected.
Not comparable:
  • because different definitions of significant errors were used in the different studies.

Cell error rates

Other studies surveyed by Panko show that the error rate per cell is between 0.38% and 21%. These results are difficult to interpret: are they percentages of all cells, cells containing formulae, or unique formulae? (If a formula is copied down a row or column, it may count as many formula cells, but is only one unique formula). If we assume a rate of 1% of unique formulae having errors, and look at spreadsheets containing from 150 to 350 unique formulae, we find that the probability of an individual spreadsheet containing an error is between 78% and 97%. This is (obviously) a high number, but is reasonably consistent with the field audit results discussed above.
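The arithmetic behind that estimate is the usual independence approximation: if each unique formula has an independent 1% chance of being wrong, a spreadsheet with n such formulae contains at least one error with probability 1 − 0.99^n. A quick sketch (illustrative only; the 1% rate is the assumption made above):

```python
# Probability that a spreadsheet contains at least one error, assuming each
# unique formula has an independent chance of being wrong (default 1%).
def prob_any_error(n_formulae: int, formula_error_rate: float = 0.01) -> float:
    return 1 - (1 - formula_error_rate) ** n_formulae

# 150 unique formulae -> about 78%; 350 -> about 97%
for n in (150, 350):
    print(n, f"{prob_any_error(n):.0%}")
```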

Lawrence and Lee found that 30% of the spreadsheets they reviewed had errors in over 10% of unique formulae; one spreadsheet had errors in more than one in five of its unique formulae. Interestingly, this was the smallest spreadsheet, showing that error rates don’t necessarily increase with complexity.

Self confidence

To make matters worse, people tend to overestimate their own capabilities. Panko describes an experiment in which people were asked to say whether they thought spreadsheets that they had developed contained errors. On the basis of their responses, about 18% of the spreadsheets would have been wrong; the true figure was 81%. The actuary who told me “As far as I am concerned, none of my spreadsheets has ever had a bug in it” was probably deluding himself.

One source of this overconfidence is probably a lack of testing and thorough review. If you don’t think that your spreadsheet has errors in it, you may not bother testing it, and so never find the errors. Nothing ever happens to make you revise your view.

Summary

It’s extremely likely that a large proportion of spreadsheets contain errors. People don’t realise just how large that proportion is, and also have misplaced confidence in their own spreadsheets.


Provident Financial modelling problem

On 6th March 2003 Provident Financial Group of Cincinnati announced a restatement of its results for the five financial years from 1997 to 2002. Between 1997 and 1999 Provident created nine pools of car leases. Part of the financial restatement was because the leases were treated off balance sheet, rather than on balance sheet as was later thought to be appropriate. But there was also a significant restatement of earnings, because there was a mistake in the model that calculated the debt amortisation for the leases. It appears that the analysts who built the model used for the first pool “put in the wrong value, and they didn’t accrue enough interest expense over the deal term. The first model that was put together had the problem, and that got carried through the other eight,” according to the Chief Financial Officer, who also went on to say that he did not think other banks had made similar errors. “We made such a unique mistake here that I think it’s unlikely.”

It appears that the error was found when Provident introduced a new financial model that was tested against the original, and that the two models produced different results. They then went back and looked at the original model to see which one was correct. We don’t know that these were spreadsheet models, but it’s entirely possible. And the lack of testing may have led to earned income being overstated by $70 million over five years. Provident also faces a class action suit from investors.

If I am right, and the erroneous model was a spreadsheet (and from the fact that those who built it were referred to as “analysts” rather than “programmers” or “developers”, some sort of user-developed software seems likely), this is a classic example of a spreadsheet being built as a one-off and then reused without adequate controls. Later pools must have used a different spreadsheet, as they were not subject to the same restatement.

The CFO has more confidence than I do in the ability of other banks to avoid similar errors.

See the press release from Provident, and press coverage from the Cincinnati Post and New York Times.


Newsletter Feb 2003

News update 2003-02: February 2003
===================

A monthly newsletter on risk management in financial services,
operational risk and user-developed software from Louise Pryor
(http://www.louisepryor.com). Comments and feedback to
news-admin@louisepryor.com. Unsubscribe by sending an email to
news-unsubscribe@louisepryor.com. Newsletter archived at
http://www.louisepryor.com/newsArchive.do.

In this issue:
1. Business continuity 1: GAO report
2. Business continuity 2: FSA and tripartite web site
3. Worms and patches
4. FSA update
5. Business continuity 3: Testing
6. Newsletter information

===============
1. Business continuity 1: GAO report

Just the other week the GAO released a report entitled “Potential
Terrorist Attacks: Additional Actions Needed to Better Prepare
Critical Financial Market Participants”. The GAO is the United
States General Accounting Office, and is the investigative arm of
Congress.

The report is a very interesting read. It analyses the effects of
the World Trade Center attacks, and describes the steps that were
taken to resume normal operations in the financial markets
afterwards. It then presents the results of a review of the
business continuity plans of 15 organisations that undertake
trading or clearing. Although many lessons were learned from the
WTC attacks, some of them have not been taken to heart: in a number
of cases, backup sites are close to the sites they would replace,
and the problem of non-availability of key staff has not been
addressed.

Some of the specific points in the report are irrelevant to
organisations whose business is not in trading or clearing, but
many of the lessons are more general and are useful to everyone.

The GAO report is available at
http://www.gao.gov/daybook/030212.htm

===============
2. Business continuity 2: FSA and tripartite web site

Coincidentally, the same week also saw the second Annual Conference
for Business Continuity and Disaster Recovery in the Financial
Services Sector. One of the speakers was Michael Foot, of the
FSA. He emphasised the importance of business continuity
arrangements, both for the continuing operations of trading and
clearing and for individual firms. The press release describing
Michael Foot’s remarks is at
http://www.fsa.gov.uk/pubs/press/2003/021.html.

There are two sources of guidance from the FSA on business
continuity. CP142, Operational risk systems and controls, was
issued in July 2002, with comments due by 31 October 2002. The
response to the consultation is expected in a few weeks. CP142 is
at http://www.fsa.gov.uk/pubs/cp/142/.

The FSA, together with the Bank of England and HM Treasury, are
joint sponsors of the tripartite web site on UK Financial Sector
Continuity planning, at
http://www.financialsectorcontinuity.gov.uk/home/. The site
includes an FSA report: A Review of Business Continuity Management
in Major Financial Groups Post 11 September 2001. It was published
in September 2002.

Some of the points in this report reflect general themes of the
FSA’s risk-based approach to regulation. It is always important to
remember that the risks that the FSA has in mind are the risks to
its objectives (RTOs). In this case, the objectives under threat
are those referring to market confidence and consumer
protection. The FSA is not worried by business failure as such,
unless its objectives are threatened. This means that their
guidance should not be seen as comprehensive.

Another theme that emerges strongly is the need for senior
management to take responsibility. We see this over and over again
in reports and guidance issued by the FSA, and it’s an issue that
should be taken seriously.

From my point of view it was interesting to see specific questions
in the review about whether and how critical bespoke applications/
spreadsheets/databases are identified and included in IT disaster
recovery plans. Given the number of mission-critical functions that
use spreadsheets and other user-developed systems, this is clearly
vital. It should also be considered as part of the regular back-up
strategy.

One of the most useful parts of the report is the BCM risk matrix
on pages 17 to 32. It summarises the critical issues and risk
factors, together with observed standard and good practice.

And, finally on this topic, the FSA’s report on the Financial Risk
Outlook for 2003 identifies the threat of a major terrorist attack
on London or another financial centre, and the need for firms to
have adequate business continuity arrangements in place, as a
priority risk. The report was issued in January 2003, and is at
http://www.fsa.gov.uk/pubs/plan/financial_risk_outlook_2003.pdf.

===============
3. Worms and patches

If you were trying to surf the web at any time during the last
weekend in January you may have found the process unbearably
slow. This was because of a computer worm (or virus; descriptions
vary) that attacked a Microsoft SQLServer vulnerability. The
vulnerability was not new, and Microsoft had already issued a patch
for it. Ironically, some of Microsoft’s own servers were affected
by the worm; they hadn’t installed the patch.

The issue of software patches is an important one. First, it’s
difficult to keep on top of all the patches that are
released. Second, it’s often a painful process installing
them. They may have poor documentation and confusing instructions,
and there are often complex rules about whether the patch is
applicable or not. Third, sometimes installing a patch can stop
other things working.

So saying “we released a patch; we’re not to blame” is not
enough. Better not to have the problem in the first place than to
patch it later.

By the way, this applies at the more lowly level of spreadsheets
and other user-developed software too. Keep the bugs out in the
first place; don’t rely on issuing revised versions and expecting
all the users to update. They won’t.

===============
4. FSA update

The FSA has announced that it intends to review all insurance
companies (with the exception of low impact firms) by the end of
March 2003. This is an acceleration of the timetable. In many cases
this will be a desk-based review; in other words, no-one from the
FSA will actually visit the firm under review, but they will sit at
their desks and ask for information. This is all part of the
general worry about insurance companies at the moment, because of
the “particular stress” they are under as a result of the current
market conditions.

The announcement refers to a new document, “The firm risk
assessment framework”, that was published only a week later. This
is essential reading for anyone who will be involved in a review by
the FSA. It describes how the review process works, with plenty of
helpful examples. This is the latest document in the “Building the
new Regulator” series, and builds on and clarifies the earlier
documents in the series. It is available at
http://www.fsa.gov.uk/pubs/policy/bnr_firm-framework.pdf. See also
http://www.louisepryor.com/showTopic.do?topic=30 for a brief
description of the ARROW risk assessment framework.

New consultation and discussion papers out this month:
—————————————————–

CP166 Reforming Polarisation: Removing the barriers to choice –
Including feedback on CP121
CP167 With-profits governance, the role of actuaries in life
insurers, and certification of insurance returns
CP168 Fees 2003/4
CP169 Professional Indemnity Insurance for personal investment
firms – consultation on rule changes; and discussion of
other policy options
CP170 Informing consumers: product disclosure at the point of
sale
CP171 Conflicts of Interest: Investment Research and Issues
of Securities

DP20 Issues for with-profits business arising from the Sandler
Review

Feedback published this month:
—————————–

CP138 Disclosure of status under the Financial Services and
Markets Act 2000 and use of the FSA logo
CP158 Mortgage endowment complaints: Changes to time limits for
making a complaint

DP14 Review of the Listing Regime

Current consultations, with dates by which responses should be
received by the FSA, are listed at
http://www.fsa.gov.uk/pubs/2_consultations.html

===============
5. Business continuity 3: Testing

Of course it’s not only in cases of huge natural disasters that
business continuity becomes an issue. Minor natural glitches play
their part too, and the recent snow in the South East provided some
real life testing for some people. The main problem, apparently
(there were no problems in Edinburgh, but then there was no snow
either), was that the roads and railways weren’t working. In any
large scale disaster this is likely to be a problem too (see the
GAO report referred to in item 1). The FSA report discussed in item
2 stresses the importance of testing business continuity plans.

One of my informants used the opportunity to check out the
contingency plan of working at home. He told me that the vital
facilities were: a high speed internet connection so he could read
his email, a speaker phone or headset for those phone meetings, and
a good hill with plenty of snow for the toboggan. Most financial
institutions would probably agree on the need for the first two.

State Street has recently announced that it is locating its
European disaster recovery site here in Edinburgh, presumably
because of the lack of snow.

===============
6. Newsletter information

This newsletter is issued approximately monthly by Louise Pryor
(http://www.louisepryor.com). Copyright (c) Louise Pryor 2003. You
may distribute it in whole or in part as long as this notice is
included. To subscribe, email news-subscribe@louisepryor.com. To
unsubscribe, email news-unsubscribe@louisepryor.com. All comments,
feedback and other queries to news-admin@louisepryor.com. Archives
at http://www.louisepryor.com/newsArchive.do.


ARROW risk assessment framework

The FSA has developed the ARROW risk assessment framework with the following objectives:

  • Help FSA meet its statutory objectives by focusing on key risks
  • Influence resource allocation to make efficient and effective use of limited resources
  • Use appropriate regulatory tools to deal with risks or issues
  • Undertake proportionally more work on a thematic (or cross-sectional) basis

ARROW stands for Advanced, Risk-Responsive Operating frameWork: a bit contrived, but we get the picture.

Firms are assigned to one of four supervision categories, based on the risk they pose to the FSA’s objectives, as perceived by the FSA. The ARROW framework describes how the FSA assesses the risk. Although the requirements are the same for all firms, the level of the FSA’s involvement depends on the supervision category. Firms in category A can expect a close and continuous relationship; those in category D can expect little or no individual contact.

An extremely important aspect of the whole regulatory approach of the FSA is that only the risks to the FSA’s objectives are considered. These objectives are concerned with market confidence, public awareness, consumer protection and the reduction of financial crime. Risks to shareholder value, for example, do not explicitly concern the FSA.

The FSA assesses the risk that a firm poses to its objectives by considering the impact and
probability separately. The unit of assessment may be the individual firm, or a business unit consisting of several firms (in large groups) or within a firm.

The impact assessment depends on the size of the firm, and is expressed as high, medium high, medium low, or low. The size of the firm is measured by premium income, assets/liabilities, funds under management, annual turnover, or other similar measures, depending on the firm’s sector.

The probability assessment is performed on a firm by firm basis, by considering each element in a matrix of risks. The thoroughness of the probability assessment depends on the impact rating of the firm. Low impact firms won’t be assessed individually; high impact firms will be assessed in great detail, with visits from the FSA; those in the middle will get desk-based assessments.
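The way the impact rating drives the thoroughness of the probability assessment can be summarised in a small sketch. The rating labels follow the text above, but the function and its return strings are my own illustration, not part of any FSA framework:

```python
# Illustrative mapping from an ARROW impact rating to the kind of
# probability assessment described above. Hypothetical code, not the FSA's.
def probability_assessment(impact: str) -> str:
    if impact == "high":
        return "detailed assessment, including FSA visits"
    if impact in ("medium high", "medium low"):
        return "desk-based assessment"
    if impact == "low":
        return "no individual assessment"
    raise ValueError(f"unknown impact rating: {impact!r}")

print(probability_assessment("medium low"))  # desk-based assessment
```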

After performing the probability assessment, the FSA develops a risk mitigation programme (RMP) for the firm. The RMP will use a selection of regulatory tools intended to reduce the risks that have been flagged as requiring action. Usually, this means that the firm has to take some action: produce and implement a plan for introducing a risk management process, for example.