1 February 2017 | Insurance

Why robots will one day rule risk transfer

While automation and the idea that a machine could take a person’s job are certainly not new concepts, recent developments in artificial intelligence (AI), in particular the development of so-called ‘self-learning’ machines, have changed the game completely.

This leap forward in machine intelligence has moved the debate away from machines doing manual jobs in place of humans and into a whole new sphere of sophistication. For the risk transfer business, it raises many new possibilities, which are either opportunities or threats depending on your perspective and your current role in an organisation.

In some ways, the fear of machines displacing workers is nothing new. A 2013 report published by Oxford University, The Future of Employment, suggested that about 47 percent of total US employment is at risk, based on the probability of computerisation for 702 detailed occupations.

A Bank of England study published in 2015, Labour’s Share, suggested that up to 15 million jobs were at risk of being replaced by a robot. These reports complement many other studies and pieces of research that reach similar conclusions.

In the last 12 months, the potential impact of AI on the re/insurance industry suddenly got very real. The challenges crystallised at the start of 2017, when Japan-based insurance company Fukoku Life Mutual found itself at the centre of the AI debate, having invested in new technology that, it was alleged, would eventually replace more than 30 of its employees.

Fukoku’s new AI system is based on IBM’s Watson Explorer, which is said to possess “cognitive technology that can think like a human”, and will be used to calculate payouts to policyholders.

The speed of robots

The use of AI is not unknown in the re/insurance industry, and many insurtech companies and startups have made headlines by offering a faster, more convenient way to do business.

This can occasionally steal headlines in other ways. Peer-to-peer insurer Lemonade claimed it had taken the record for the fastest insurance claim, payment after only three seconds, thanks to the AI it uses to communicate with its customers through a mobile platform.


In this three-second window, Lemonade claims, robot ‘AI Jim’ had reviewed the claim, cross-referenced it with the policy, run 18 anti-fraud algorithms on it, approved the claim, sent wiring instructions to the bank, and informed the customer that the claim was closed.
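The pipeline Lemonade describes can be pictured as a single automated pass over the claim. The sketch below is purely illustrative, with hypothetical names and only three stand-in checks in place of the 18 anti-fraud algorithms the company cites; it is not Lemonade’s actual system.

```python
def fraud_checks(claim):
    """Stand-ins for the anti-fraud algorithms; each returns True if the claim passes."""
    return [
        claim["amount"] <= claim["policy"]["limit"],  # within the coverage limit
        claim["item"] in claim["policy"]["covered"],  # the item is actually insured
        claim["amount"] > 0,                          # basic sanity check
    ]

def handle_claim(claim):
    """Review the claim, cross-reference the policy, run every check, then decide."""
    if all(fraud_checks(claim)):
        # In a real system this step would send wiring instructions to the bank.
        return {"status": "approved", "payout": claim["amount"]}
    # Anything that fails a check is referred to a human adjuster instead.
    return {"status": "referred"}

claim = {"item": "bicycle", "amount": 400,
         "policy": {"limit": 1000, "covered": {"bicycle", "laptop"}}}
result = handle_claim(claim)
```

The speed comes from the fact that every step is a lookup or a comparison: nothing in the happy path waits on a person.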

The Fukoku and Lemonade examples both make one thing very clear: that AI can complete tasks more quickly and, therefore, increase productivity and cut costs.

“Over time, as AI systems learn from their interactions with the environment and with their human masters, they are likely to become more effective than humans and replace them,” PwC wrote in a 2016 report, AI in Insurance: Hype or Reality?

The report added that just as the first machine age, the Industrial Revolution, automated physical work, we are now living in the second machine age, in which both manual and cognitive work are increasingly augmented and automated.

Fearful of the future

Although some in the re/insurance industry may see AI as a potential threat, an overwhelming majority see this phenomenon as an opportunity to become more creative, according to an Intelligent Insurer online survey carried out in January.

The poll showed that 89 percent of readers see AI as an opportunity rather than a threat to their jobs or businesses.

The automation of routine tasks and the streamlining of the underwriting process were commonly cited as an opportunity to allow more creativity within the industry, and would also enable more sophisticated analyses that are not currently possible because of resource constraints.

Even if the more routine tasks were at risk, some respondents dismissed any fears that AI could replace the more technical and analytical roles.

Roles that require a more personal touch, such as broking, were also not seen as under threat.

In fact, the lack of human interaction was cited as one of the bigger threats that AI poses to the industry, with customers likely to appreciate being given the chance to talk to a real person.

With regard to claims being processed by AI, some felt it could leave insureds believing their claim had been judged unfairly or handled in a uniform fashion, possibly leaving them feeling cheated or misunderstood by the robot.

“AI is a risk to insurers only if they are not acting on it, but it’s a huge opportunity if they are exploiting it as a tool,” says Anand Rao, innovation lead at PwC Analytics Group.

Largely affirming some of the viewpoints put forward in the survey, Rao agrees that AI will be able to fill in for some of the more routine tasks, and would allow more analysis and creativity in the industry.

“It is a virtuous cycle. As workers are released from doing more routine stuff, they are able to start doing more ‘intelligent’ or ‘value-added’ work rather than just pushing paper,” he says.

In the past six or seven years, AI has increasingly been making its way into the commercial world, with aerospace, defence and intelligence as the main users.

Rao says that although a number of areas have matured and embraced AI, it doesn’t really matter which sector it is implemented in, as there are two kinds of jobs that are essentially going to be displaced: repetitive manual labour and repetitive cognitive tasks.

“AI will fill in for the more risky repetitive manual labour environments, for example bomb disposal, or working in areas such as nuclear plants or earthquake-stricken areas,” says Rao.

Agriculture and manufacturing will also see more displacement, he says, when the latter implements more flexible robots than those already in place.

At the other end of the spectrum are cognitive tasks such as decision-making. Once the robot has learnt the decision-making aspects of a cognitive task, that task can be automated, Rao explains.

“If we understand how to do it, and it’s pretty routine and repeatable, then that will be automated. Fukoku and Lemonade have both gone along those lines.”

Going one step further, the cognitive tasks need not be very repetitive: a robot can be configured to capture and interpret existing applications for processing a transaction, manipulate data, respond to triggers and communicate with other devices, which is known as robotic process automation (RPA).
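In essence, an RPA bot reads the screens and forms of an existing application the way a person would, then acts on what it finds. The toy sketch below, with hypothetical names throughout, shows the basic shape: a trigger fires, the bot extracts structured fields from a legacy form, and it forwards the record to a downstream system with no human in the loop.

```python
def extract_fields(form_text):
    """Read key: value lines out of a legacy form, as an RPA bot would scrape a screen."""
    fields = {}
    for line in form_text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

def on_new_form(form_text, downstream):
    """Trigger handler: parse the incoming form and push it to another system."""
    record = extract_fields(form_text)
    downstream.append(record)  # stands in for an API call or database write
    return record

# A new form arriving is the trigger; the ledger stands in for the downstream system.
inbox = "Name: A. Smith\nPolicy: 12345\nAmount: 400"
ledger = []
on_new_form(inbox, ledger)
```

The point of RPA is that the existing application is untouched: the bot sits on top of it, so no system integration project is needed.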

As long as there is plenty of data from past activity, the machine can learn to fill this space, Rao says.

The areas of insurance Rao sees becoming automated include operations, underwriting, pricing, and claims.

“Underwriting is a classic example. If you have done it in the past, you have made actuarial projections—which insurers have done for a number of decades—and you have the data. Now, with the data—even if it is a decade long—comes the machine learning aspects, where you can start teaching the machine underwriting.”

This is not restricted to personal lines, which Rao says many players have already automated, but is also cropping up in the commercial space, where automated algorithms can look at the history and learn from it.
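What “teaching the machine underwriting” from historical data might look like can be sketched very simply. The example below is a deliberately minimal, hypothetical illustration, not an underwriting model: it reduces each past risk to a single score and learns one decision threshold from a history of accept/decline outcomes, whereas a real system would learn from many rating factors at once.

```python
def learn_threshold(history):
    """Learn a cut-off from past decisions: the midpoint between the mean
    risk score of accepted risks and the mean score of declined ones."""
    accepted = [score for score, was_accepted in history if was_accepted]
    declined = [score for score, was_accepted in history if not was_accepted]
    mean_accepted = sum(accepted) / len(accepted)
    mean_declined = sum(declined) / len(declined)
    return (mean_accepted + mean_declined) / 2

def underwrite(score, threshold):
    """Apply the learnt rule to a new submission."""
    return "accept" if score <= threshold else "decline"

# Hypothetical decade of past decisions: (risk score, was the risk accepted?)
history = [(0.1, True), (0.2, True), (0.3, True),
           (0.7, False), (0.8, False), (0.9, False)]
threshold = learn_threshold(history)  # 0.5 for this data
```

The machine-learning aspect Rao describes is exactly this loop at scale: the rule is not written by hand but recovered from the decisions underwriters have already made.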

Augmentation of reality

One of the concerns with the introduction of AI and the streamlining of processes is that the lack of human interaction may hurt these processes, particularly from the customer’s point of view.

Customers may not appreciate being treated with a one-size-fits-all approach, or potentially being misunderstood or treated unfairly by the robot.

“This is where it’s important to distinguish automation from augmentation,” Rao says.

Augmentation is where the agent or the adviser is using the AI tool, and it may not be experienced directly by the consumer.

“Depending on the type of consumer, they may or may not be able to use it. Or even if they are comfortable using it, they may not want to take a decision, and that’s where it is better to mediate it through the agent or adviser,” he says.

Instead of agents having to understand everything, they will now have a tool that makes it easier for them to mediate, according to Rao.

“With those kinds of tools they will be able to serve more people, given the time, which means you need fewer agents to deliver the same level of insurance, and the same level of coverage,” he says.

This is where the ‘value-added’ work can now come in, Rao says: the human has more time to gain insight into why they should underwrite a person with certain characteristics, insight they can then teach to the machine.

“There is a notion we have started seeing in some areas, where you automate what is known so far, and the human makes certain decisions. Then the human is teaching the machine and the machine gets smarter. So the machine gets smarter and the human gets smarter.”

The human ‘getting smarter’ is in the context of being unbound from the more routine tasks and having more time for analyses.

Rao suggests that around 30 to 50 percent of the jobs we know today will disappear in the next 10 years, but asserts that newer jobs will emerge, providing an opportunity to get smarter and add value through new services.

There are a few concerns, however, surrounding security—privacy, confidentiality of individual-level and company-level data—and also any bias behind the decision-making of the machine itself.

The US and EU are concerned about potential bias in the data a machine is fed, suggests Rao, as that data may span more than 10 years and may have been built with a select client base in mind. This in turn may mean the system is biased towards picking the same type of customers and discriminating against others, he says.

It’s an issue that is hard to avoid, especially as machines don’t have human judgement, says Rao.

Rao’s second concern with AI, where he believes the US has taken the lead, is that the decisions made by AI have to be ‘explainable’ to a customer.

“If an insurer rejects you and says ‘no, we’re not giving you insurance at this preferred rate’, then I’d better be able to explain it. Not being able to explain it can limit your standing in a court of law. You want to be able to prove a person was not discriminated against and then be able to explain the factors as to why.”

