
The big guys don’t always know what they’re doing

You’d think that a really big software company, like Adobe, would know what it’s doing. But no. You may have noticed that there was a big data breach: millions of usernames and (encrypted) passwords were stolen. But they were encrypted, so no big deal, right?

Ah. Well. That’s the point. As this article explains, it was indeed the encrypted passwords that were stolen, not the hashes (if this is gobbledygook to you, the article has a very clear explanation of what this means). As the password hints were stolen too, it turns out to be really easy to decrypt many of them.

Now, I am by no means a security expert. And for websites I build nowadays, I use a ready-rolled solution (usually WordPress). But when I wrote things from scratch, even I knew better than to store the encrypted passwords. I may not have used the most secure hashing algorithm, or proper salting, but I didn’t encrypt the passwords: I stored hashes.
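For the curious, here’s roughly what “store a hash, not an encryption” looks like in practice. This is a minimal sketch using only the Python standard library; the parameter choices (16-byte salt, 100,000 iterations) are illustrative, not a recommendation.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest). A random per-user salt means identical
    passwords produce different stored values, so one cracked password
    doesn't give away everyone else who chose the same one."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare. The password itself is never
    stored, and cannot be recovered from what is."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison, to avoid leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

Unlike encryption, there is no key that decrypts everything at once: that’s the whole point.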

(HT Bruce Schneier)


Models and modellers

On 1 March I gave the Worshipful Company of Actuaries lecture at Heriot-Watt University. Here’s the abstract:

Being an actuary nowadays is all about modelling, and in this lecture I’ll discuss how we should go about it. We all know that all models are wrong but some are useful – what does this mean in practice? And what have sheep and elephants got to do with it? Along the way I’ll also consider some of the ways in which the actuarial profession is changing now and is likely to change in the future, and what you should do about it.

And here’s what I said.


Risk identification

It goes without saying that risk identification is vital for effective risk management. In order to manage your risks effectively, you have to know what they are. The really important thing during risk identification is not to miss any risks. You can decide to ignore some of them at a later stage, after you have assessed them, but they must all be included at this stage.

There are a number of different techniques that can be used. The ideal is probably to use a combination, and work with outsiders as well as people who are involved in the business and know it well. That way you can make good use of people’s expertise while reaping the benefits of a fresh viewpoint. Useful techniques include various brainstorming methods as well as systematic inspections and process analysis.

Whatever technique (or techniques) you use, it is important to provide an audit trail so that you can be sure of what happened and that no risks were omitted.
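As a sketch of what such an audit trail might look like in practice, here is a minimal risk register that records who raised each risk, when, and by which technique. The field names are my invention, not any standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Risk:
    description: str
    raised_by: str
    technique: str      # e.g. "brainstorm", "systematic inspection"
    raised_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    status: str = "identified"   # ignore/accept decisions come later

class RiskRegister:
    """Every risk goes in at identification; nothing is silently dropped."""
    def __init__(self) -> None:
        self.risks: list[Risk] = []
        self.log: list[str] = []   # the audit trail itself

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)
        self.log.append(
            f"{risk.raised_at.isoformat()} {risk.raised_by} "
            f"({risk.technique}): {risk.description}")
```

The point of the log is that you can later demonstrate what was raised, by whom, and that nothing was quietly omitted.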


The FSA and risk-based capital

The FSA has published proposals for a new framework for risk-based capital requirements for both life and non-life insurers. Although the details of the calculations differ, the overall structure is the same for both types. The proposals were issued in July and August 2003; the consultation period ends on 30 November 2003.

General framework

Insurers will be required to hold the higher of:

  • the Minimum Capital Requirement (MCR), as set out in EU directives
  • the Enhanced Capital Requirement (ECR), a more risk-sensitive calculation specified by the FSA
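The “higher of” rule lends itself to a one-line sketch. The figures, and the modelling of the ICG as a simple uplift, are invented purely for illustration.

```python
def required_capital(mcr: float, ecr: float) -> float:
    """Insurers hold the higher of the MCR and the ECR."""
    return max(mcr, ecr)

# Invented figures (in £m), purely for illustration:
base = required_capital(40.0, 55.0)   # the ECR bites here: base == 55.0

# The ICG will usually be at or above the ECR; modelled here, as an
# assumption only, as a flat uplift on that base.
icg = 1.1 * base

capital_held = 52.0
must_notify_fsa = capital_held < icg  # firm holds less than its ICG
```

In reality the ICG is set firm by firm, informed by the ARROW assessment and the quality of the firm’s own risk processes, not by a mechanical multiplier.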

The ECR calculations are obviously different for life and non-life insurers. However, for both types the calculations make various industry-wide assumptions that may not be met by individual firms, whose risk profiles may be different from the average. The FSA proposes to take these differences into account through the Individual Capital Adequacy Standards (ICAS) mechanism. They say that ICAS will

  • mean that firms will hold capital more appropriate to their business and control risks
  • emphasise the responsibility of senior management for ensuring that firms have adequate financial resources
  • provide incentives for better risk management

ICAS will operate through Individual Capital Guidance (ICG). The ICG will usually be at or above ECR, and will be affected by whether firms’ risk assessment processes follow all the FSA’s guidance. The ARROW assessments will be a major input.

Although ICG is only guidance, firms will be expected to notify the FSA if capital falls below the ICG level. In addition, firms that fail to meet the ICG will be expected to set out a plan to restore adequate capital.


The FSA and operational risk

The FSA has produced several documents that are concerned with operational risk, and others that are concerned with systems and controls.

The FSA sometimes distinguishes between operational risk (as part of business risk) and control risk and sometimes doesn’t. For example, the guidance was originally intended to be part of a separate module, PROR, and was presented as such in CP97. However, the guidance was completely rewritten, and moved into the systems and controls module (SYSC), in CP142.

Further guidance on operational risk is contained in PS97_115, a policy statement issued after feedback on CP97 and CP115, and in PS140, a policy statement issued after feedback on CP140. PS140 applies to insurers, friendly societies, and Lloyd’s.

Operational risk is also mentioned in several of the documents in the “Building a New Regulator” series. These documents set out the overall approach of the FSA, and describe their risk framework and regulatory processes.

A report on how firms are going about the business of introducing operational risk management systems, “Building a framework for operational risk management: the FSA’s observations”, was published in July 2003. It contains useful information on good practices.

The FSA’s new structure for capital requirements, based on the calculated ECR (Enhanced Capital Requirement) which is then modified by the ICG (Individual Capital Guidance), as discussed in CP190 and CP195, means that operational risk will affect the capital that firms need. This will be through the ICG, which although it takes the ECR into account is also influenced by the systems and controls that firms have in place. The FSA say:

The more firms are able to demonstrate that their risk assessment processes capture and quantify all of the issues in our guidance, then the lower we are likely to assess their ICG (and vice versa). This provides an incentive for good risk management.




User-developed software

User-developed software is, as its name suggests, software that is developed by users rather than by specialist developers. It includes spreadsheets, parameter-driven financial models, personal databases, VB code, and so on.

Caution: user-developed systems may be hazardous to your organization
Davis, 1981

User-developed software has many advantages, and can really leverage the expertise of those users. You, the users, maintain control over the system being developed; you hope for rapid turnaround on modifications; and a whole layer of communication is removed from between the concepts being modelled and the people doing the implementation.

However, the expertise of the user is unlikely to include extensive software engineering experience. Many systems developed by users end up being very large and complex, and are not as easy to maintain and enhance as they should be. Moreover, as they are often not subject to rigorous quality controls, they may contain significant bugs, be hard to use, and lack robustness.
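One cheap mitigation is the kind of lightweight check that professional developers take for granted. Here is a hypothetical example: a user-written financial helper of the sort that often lives in a spreadsheet, plus the sanity checks it usually lacks. The function and figures are my invention.

```python
def annuity_pv(payment: float, rate: float, n: int) -> float:
    """Present value of n level payments in arrear at interest rate `rate`.
    Typical of the small financial formulas users embed in spreadsheets."""
    if rate == 0:
        return payment * n          # the rate == 0 case is easy to forget
    return payment * (1 - (1 + rate) ** -n) / rate

# Even a couple of sanity checks catch many spreadsheet-style bugs:
assert abs(annuity_pv(100, 0.0, 10) - 1000) < 1e-9        # zero-rate case
assert abs(annuity_pv(100, 0.05, 1) - 100 / 1.05) < 1e-9  # single payment
```

A handful of assertions like these, run whenever the model changes, is a long way short of rigorous quality control, but it is far better than nothing.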




Operational Risk

Operational risk is gaining an increasingly high profile. In the UK, the Turnbull report recommended that listed companies should manage their operational risk explicitly; and the FSA includes operational risk in its new ARROW framework for risk assessment.

Historically (though the history is admittedly rather short), operational risk has received most attention from the banking industry. This is still evident in much of the published literature; often the authors simply assume that the industry in question is banking, without explicitly saying so. This can be confusing.


The FSA, following Basel, defines operational risk as follows:

Operational risk is the risk of loss resulting from inadequate or failed internal processes, people and systems or from external events.

This definition gives a reasonable idea of operational risk, but is not detailed enough for operational use. For the purposes of risk identification, assessment, control and mitigation, the definition must be refined so that it is a clear-cut decision as to which risks are included and which are not.

In addition, the final phrase, “or from external events”, must be interpreted appropriately for the organization in question. For example, for a general insurance company the losses due to paying out claims for an earthquake should not be counted as an operational loss, whereas the losses due to the destruction of head office by the same earthquake should.
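The earthquake distinction can be made mechanical, which is the sort of clear-cut rule the refined definition needs. This is a toy sketch; the channel names are my invention, not a regulatory taxonomy.

```python
def classify_loss(event: str, channel: str) -> str:
    """Classify a loss by how it arises, not by what caused it.
    channel 'claims'     - paying out on policies written (insurance risk)
    channel 'operations' - damage to the firm's own processes, people
                           or systems (operational risk)"""
    if channel == "claims":
        return "insurance risk"
    if channel == "operations":
        return "operational risk"
    raise ValueError(f"unknown channel: {channel}")

# The same external event can generate both kinds of loss:
classify_loss("earthquake", "claims")      # payouts to policyholders
classify_loss("earthquake", "operations")  # head office destroyed
```

The point is that classification turns on the channel through which the loss arrives, so the same event can legitimately appear in both columns.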




Software Risk

Software risk is a form of operational risk: it consists of the risks of using software.

The principal components are:

  • Erroneous results. The software produces the wrong results.
  • No results. The software fails to produce results, or fails to produce them by the time at which they are needed.
  • High costs. The results are accurate, and appear on time, but at a very high cost.

These risks apply whether the software is developed in-house or externally, and whether by professional developers or users (see user-developed software). Problems can be caused by:

  • Bugs
  • Usability issues
  • Development delays
  • Misunderstood requirements

among other causes.
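One way to put this taxonomy to work is to record software risk events against the components and causes listed above, so that patterns (say, user-developed tools repeatedly producing erroneous results) become visible. The names below are assumptions for illustration.

```python
from dataclasses import dataclass
from enum import Enum

class Component(Enum):
    ERRONEOUS_RESULTS = "erroneous results"   # wrong answers
    NO_RESULTS = "no results"                 # missing or late answers
    HIGH_COSTS = "high costs"                 # right answers, dearly bought

class Cause(Enum):
    BUG = "bug"
    USABILITY = "usability issue"
    DELAY = "development delay"
    REQUIREMENTS = "misunderstood requirements"

@dataclass
class SoftwareRiskEvent:
    component: Component
    cause: Cause
    in_house: bool        # developed in-house rather than externally
    user_developed: bool  # by users rather than professional developers
```

Tallying events by (component, cause, user_developed) is then a one-liner, and gives some evidence for where controls are weakest.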

