Actuarial Data

The new modelling

Data is the new modelling. That is, it’s where all the sexy stuff is going to be over the next few years. Over the last few years, in the insurance industry at least, modelling has been where it’s at. Driven largely by Solvency II, a huge amount of effort has gone into building and, now, validating hugely complex financial models.

But now, in the insurance industry as well as others, data is coming to the fore. After all, what is a model without data? And, as we all know, Garbage In, Garbage Out is one of the fundamental tenets of computing. The FSA has pointed out that data is a key area for the successful introduction of Solvency II, and has produced a scoping tool to help it assess a firm’s data management processes.

And it’s not only Solvency II. At GIRO last week there was an interesting debate over whether telematics will be at the heart of personal motor insurance in ten years’ time. The thing about telematics is that it produces large quantities of data. With the Test Achats case meaning that gender can no longer be used as a rating factor, insurers are going to be looking for other ways of setting premiums, and other factors they can take into account. The thing about gender, of course, is that it doesn’t take much data. It’s just a single bit in the database. Other rating factors may have more predictive power, but it’s harder to get at them.
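The data-hunger of richer rating factors can be sketched in a few lines of Python. Everything here is illustrative: the simulated portfolio, the claim frequencies, and the telematics "night-driving share" feature are all made-up assumptions, not real rating data. The point is structural: a one-bit factor needs only two cells of experience to rate against, while even a crudely banded telematics score needs many more, each requiring its own exposure.

```python
import random

random.seed(42)

# Hypothetical simulated portfolio: each policyholder has a one-bit
# factor ("gender") and a richer telematics feature (share of driving
# done at night). The claim model below is an invented assumption.
def simulate_policyholder():
    gender = random.randint(0, 1)       # a single bit in the database
    night_share = random.random()       # telematics: continuous, data-hungry
    freq = 0.05 + 0.02 * gender + 0.10 * night_share  # assumed true frequency
    claim = 1 if random.random() < freq else 0
    return gender, night_share, claim

portfolio = [simulate_policyholder() for _ in range(100_000)]

# Rating on the one-bit factor needs only two cells of experience...
by_gender = {}
for g, _, c in portfolio:
    n, tot = by_gender.get(g, (0, 0))
    by_gender[g] = (n + 1, tot + c)

# ...whereas banding the telematics score into deciles needs ten cells,
# each of which must accumulate enough exposure to be credible.
by_band = {}
for _, s, c in portfolio:
    band = min(int(s * 10), 9)
    n, tot = by_band.get(band, (0, 0))
    by_band[band] = (n + 1, tot + c)

for g, (n, tot) in sorted(by_gender.items()):
    print(f"gender={g}: observed frequency {tot / n:.3f} from {n} policies")
for b, (n, tot) in sorted(by_band.items()):
    print(f"night-share band {b}: observed frequency {tot / n:.3f} from {n} policies")
```

Two cells versus ten is the mildest possible version of the problem; a real telematics rating structure would cross many such features, and the required data volume multiplies accordingly.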

We’re seeing this everywhere, though. As computers continue to get more powerful, and data storage gets ever cheaper (how big is the disk drive on your laptop? — even my phone has 16GB), doing things the rough and ready way with only limited data has fewer and fewer advantages. Big data is becoming mainstream: look at Google, for instance. And why did HP buy Autonomy?

You mark my words, a change is gonna come.