Loss estimates for hurricane Laura are diverse and vague. Steve Smith, director, insurance product and modelling for QOMPLX, says he knows why – and what to do about it.
The loss estimates for hurricane Laura reveal a problem for the re/insurance industry: they are so wildly diverse that it's easy to start questioning their value. Not only is there huge disparity between different modelling companies' estimates – with AIR coming in at $4-$8 billion, for example, while RMS sets the losses at $9-$13 billion – the estimates also often cover such a wide range that you could fit another hurricane's worth of losses within the given range.
Dr Steve Smith, director, insurance product and modelling at QOMPLX, says his team has investigated the problem, identifying the causes and several solutions.
“The problem comes down to how the models are built, the paucity of data that goes into them and the uncertainty around the characteristics of the loss,” he says. “While the characteristics of the event are well understood - we have a very solid data set within a few days of an event like Laura - there is uncertainty around how that hazard translates into a loss.”
To begin with the data problem, Smith believes that while the re/insurance industry used to lead the field in terms of data gathering, it is now far outstripped by other industries. Crucial finer details are often missed when writing policies: for example, a person may insure a property but have a mailing address five miles away, which gets listed as the risk address; and even when the risk location is correctly understood, other key information is often omitted – for example, the elevation of the property, the landscape that surrounds it, and whether part of the property is underground. All these factors could significantly impact how a property is affected by an event such as a hurricane or flood.
“Every building is as different as a snowflake, and capturing those details is very important when looking at losses for a particular event,” says Smith. “We don’t have enough augmented data around the risks that we have, and so when you overlay the hazard and the risk to estimate the actual damage, that is often also flawed - because the methodology is generic, not specific.”
He believes part of this solution is to educate insurance agents: “It’s about making sure they understand why it is important to capture lots of data at the point of contact with the insurance buyer,” he says.
Capturing the information is one thing; conveying it clearly between various parties within the industry is another. One issue QOMPLX is currently addressing is the way in which information in insurance contracts is codified: the aim is to create a universal language for defining risk to ensure nothing gets lost in translation. This Contract Definition Language (CDL) will help to ensure that everyone involved in an insurance contract is on the same page.
“It’s not just capturing and recording data that’s important – it’s how we talk about that data to each other when we transfer it up the chain,” he says. “It’s all very well for the agent to capture all they can about a house when they write the initial policy, but if they send it up to the insurance company in a way that the insurance company can’t read, it’s a waste of time.
“If you look at the way a policy is codified and structured, there are currently millions of ways of doing that and millions of schema for how you talk about insurance and how you describe a limit – and none of those talk to each other very well. Our concept is we don’t need another schema, we need a language so that you can codify the risk characteristics. This will be especially useful when transferring data. A language that covers everything and that everyone can speak will mean that everyone understands what this risk is, and there will be no loss of fidelity when information is transferred.”
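QOMPLX has not published the CDL syntax, so the sketch below uses made-up field names purely to illustrate the underlying idea: rather than adding yet another schema, you define one canonical representation of the risk characteristics, and each carrier's existing schema is mapped into it, so nothing is lost when the data is transferred up the chain.

```python
# Illustrative only: these field names and schemas are invented, not
# actual CDL syntax. Two carriers describe the same policy terms in
# different shapes; mapping both into one canonical record preserves
# fidelity when the data moves between parties.

def from_carrier_a(rec):
    """Hypothetical carrier A: limit is 'SumInsured', address is nested."""
    return {
        "peril": rec["Peril"].lower(),
        "limit_usd": float(rec["SumInsured"]),
        "deductible_usd": float(rec["Excess"]),
        "risk_address": rec["Location"]["Street"],
    }

def from_carrier_b(rec):
    """Hypothetical carrier B: flat schema with different field names."""
    return {
        "peril": rec["peril_code"].lower(),
        "limit_usd": rec["limit"],
        "deductible_usd": rec["deductible"],
        "risk_address": rec["risk_street"],
    }

a = from_carrier_a({
    "Peril": "WIND",
    "SumInsured": "500000",
    "Excess": "5000",
    "Location": {"Street": "12 Bayou Rd"},
})
b = from_carrier_b({
    "peril_code": "Wind",
    "limit": 500000.0,
    "deductible": 5000.0,
    "risk_street": "12 Bayou Rd",
})
assert a == b  # same risk, same canonical record, no loss of fidelity
```

The point of the single canonical form is that the receiving party only ever needs to understand one representation, however many source schemas exist upstream.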
QOMPLX’s vision is that the CDL will be an open source resource that comes with other useful features such as conversion tools. Using the CDL as a base, QOMPLX’s modelling platforms (Q:HELM for catastrophe modelling and QSIM for agent-based modelling) will be used to simulate the multidimensional impacts from major catastrophic events, resulting in more accurate and precise loss estimates.
This is one of several initiatives QOMPLX is working on that could help dispel some of the vagaries that currently surround loss estimates. Another is an enhanced real estate database that will utilise mortgage data to gain a more granular understanding of the risk profile of each property.
As well as making use of the rich amount of data mortgage companies collect about properties, it will also factor in valuable information about the policyholder. It will be possible to look at their history - whether they make lots of claims, keep their house well renovated, or clear snow (a slip hazard) from their driveway in winter, for example.
“These are things insurance companies should want to know, because they are not just insuring the building, they are insuring the person that comes along with it,” says Smith. “We think data use is moving towards a concept of not just looking at the data related to the risks but also data relating to the risk owner, so that the insurer can understand the behaviour that drives the losses.”
The creation of the database is high on QOMPLX’s agenda for 2021, along with the development of schema that make it easy to examine the relationship between two sets of data – for example, the risk property and its owner. This is being achieved through knowledge graphs, which make it possible to examine data from different sources and see how they interact. This capability will be offered as a service to clients who want to re-engineer their processes, transforming the way they capture data and finding more effective ways to use that data.
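The knowledge-graph idea can be sketched minimally: nodes for the property and its owner, typed edges between them, and a traversal that pulls together facts from both. The node names, relations and values below are all invented for illustration; a production graph would use a dedicated graph store rather than plain dictionaries.

```python
# Minimal knowledge-graph sketch (illustrative names and values).
# Facts are stored as (subject, relation, value) triples; a query
# can traverse from a risk property to its owner and back.

edges = [
    ("prop:123-main-st", "owned_by",    "person:jane-doe"),
    ("prop:123-main-st", "elevation_m", 2.1),
    ("person:jane-doe",  "claims_5yr",  3),
    ("person:jane-doe",  "clears_snow", True),
]

def facts_about(node):
    """All (relation, value) pairs attached to a node."""
    return {rel: val for src, rel, val in edges if src == node}

def owner_profile(prop):
    """Traverse property -> owner, then collect the owner's facts."""
    owner = facts_about(prop)["owned_by"]
    return facts_about(owner)

profile = owner_profile("prop:123-main-st")
```

The value of the graph form is exactly this kind of cross-source query: property facts and policyholder facts live as separate data sets, but a single traversal shows how they interact.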
Meanwhile, QOMPLX’s agent-based modelling and simulation platform, QSIM, can be used in conjunction with QOMPLX’s cat modelling platform Q:HELM to understand the chains of cause and effect that are set off by an event such as hurricane Laura. QSIM can predict the response of housing markets, credit markets or supply chains to events such as hurricanes or floods, because it analyses how the actions and interactions of autonomous agents affect a system as a whole.
“It’s very good for things like supply chains, where you have all these dependencies: if one supplier breaks down, how does it affect other suppliers in the chain? This can lead to loss amplification when a storm happens – for example, people can’t get access to wood, prices go up, and so on,” says Smith.
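The lumber example can be reduced to a toy agent-based sketch: each supplier agent either ships its capacity or is knocked out by the storm, and a simple pricing rule translates the tighter supply/demand balance into a higher price. All the numbers and the pricing rule are invented for illustration; QSIM's actual mechanics are not public.

```python
# Toy sketch of loss amplification (all figures invented): when storm
# damage takes suppliers offline, the same demand chases less supply
# and prices rise, amplifying the original insured loss.

def lumber_price(base_price, demand, suppliers):
    """Price under a simple inverse-supply rule, for illustration only."""
    supply = sum(capacity for capacity, operational in suppliers if operational)
    if supply == 0:
        raise ValueError("no supply at any price")
    return base_price * max(1.0, demand / supply)

suppliers = [(100, True), (80, True), (60, True)]
before = lumber_price(10.0, 240, suppliers)          # supply meets demand

storm_hit = [(100, True), (80, False), (60, False)]  # two suppliers down
after = lumber_price(10.0, 240, storm_hit)           # demand now 2.4x supply
```

Even this crude rule shows the knock-on effect Smith describes: the post-storm price is a multiple of the pre-storm one, purely because of dependencies between agents rather than any change in direct wind damage.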
“You can also look at the legal side through agent-based modelling, considering factors such as how litigious people are and whether the courts are going to side with the claimant or the insurer.”
With the promise of more plentiful data on properties and policyholders and better ways to see how it interacts, the combined power of Q:HELM and QSIM, and the prospect of a universal language that will bring greater clarity to policies, QOMPLX is building a compelling solution to the problem posed by the hurricane Laura loss estimates.
“Losses from major cat events are hard to come up with; they are uncertain because of problems with data and modelling. We think there are ways to better constrain the problem and come up with better loss estimates by using better data, more augmented data, and by thinking about new ways of modelling the problem,” says Smith. A more accurate view of losses could be just around the corner.