18 February 2026 | Technology

Silent cyber whispers cautionary historic tale to AI risk: Relm

The early days of cyber risk remain a cautionary tale, one that Claire Davey, SVP of product innovation and emerging risk at Relm, is urging insurers to remember when they address AI. 

In the beginning, cyber coverage was written before exposures were fully understood, aggregation risk went largely unmodelled, and by the time the industry grasped the scale of the problem, losses were already embedded across portfolios.

Speaking exclusively ahead of the Intelligent Insurer Agentic and Generative AI For Insurance conference held on February 12, Davey said: “AI should be considered a subset of cyber risk, but one of the problems that the insurance industry has is that it often weds ideas into lines of business and struggles to respond to risks that can transcend those categories. We saw that with cyber. I think we’re going to see it again with AI.”

As AI adoption accelerates across business sectors, from financial services and healthcare to professional services and physical products, the risk is no longer theoretical. Yet the insurance industry’s response remains fragmented, with many carriers still treating AI as a future problem rather than a present one.

These tensions between innovation, underwriting discipline and regulatory uncertainty were front and centre at the Agentic and Generative AI for Insurance – EU 2026 workshop ‘Frontier risks and product innovation: AI & Robotics’. Davey, who spoke as a panellist on ‘Product liability and AI: Crafting Fit-For-Purpose AI Risk Products’, explained that Relm was keen to support the workshop because of its focus on the convergence of AI and risk in everyday applications, and on how the insurance industry is – and could or should be – approaching those risks in the future.

Prior to the workshop, Davey discussed the still under‑addressed pressures of AI risk with Intelligent Insurer.

Silence around liability is growing

Davey highlighted one of the clearest parallels with early cyber: the market’s current posture on liability. “A lot of insurers are just sitting silent, waiting for the claims to emerge, or waiting for other insurers to make a move, believing that the safest option is to do nothing,” Davey said.

That silence, however, comes at a cost. “That doesn’t help insurers model it. That doesn’t help insurers gain data. It doesn’t help clients have confidence around their coverage.” AI risk does not sit neatly within a single line of business. Relm is witnessing risk spanning cyber, financial lines, casualty, product liability and, increasingly, bodily injury. As with silent cyber, Davey cautions that AI risk creates the potential for unintended exposure to build up across portfolios long before it is visible.

Generative AI and the return of aggregation risk

Concern is mounting around generative AI, particularly for models trained on large volumes of licensed material or that produce user-generated content.

“There’s greater worry around generative AI models, both because clients are struggling to get cover, and because we’re concerned about the quite catastrophic, large exposure of them,” Davey said.

Much of the litigation to date has focused on intellectual property infringement, particularly in the US. But the prevalence of settlements means the market is still operating with limited judicial guidance. “We’re already seeing the IP losses come through,” Davey noted, but “there’s a lot of IP settlements prior to judgments, so we’re not seeing case law come through.”

For insurers and reinsurers, that uncertainty makes it difficult to assess severity, frequency and aggregation, challenges that are uncomfortably familiar from cyber’s early development.

Why AI doesn’t belong in a standalone policy

Despite frequent debate about the need for AI risk coverage, Davey isn’t convinced it warrants its own line of business.

“I don’t necessarily think that we need to create a whole new segment of insurance just for AI. What I expect to see is adapted or enhanced versions of existing products and lines of business to provide affirmative coverage and solutions.” 

For Davey, the challenge is cultural and operational. “We don’t need a team of AI risk insurance experts to just write AI risks. It’s the responsibility of underwriters across affected lines to get to grips with AI and how it really impacts their insurable risk.”

Governance is becoming a proxy for insurability

With loss history thin or non-existent, Davey said that Relm, the first insurer dedicated to emerging risk, turns to analogous risks to inform its pricing approach. She explained: “The EU Artificial Intelligence Act is very similarly structured to the General Data Protection Regulation (GDPR) in terms of regulation, fines and penalties. We use data from GDPR cases for cross-comparative analysis to assess AI risk.”

Davey added: “We can’t know all the answers just by drawing on similar data sets, so our approach has generally been to stay cautious.” As a result, Relm is turning to governance as a key underwriting signal for AI risk.

“We’re looking at the extent of oversight. Are there humans overseeing what’s going on and to what extent? Where is this training data coming from and the licences and the methods that are being used?” Davey explained.

She also pointed to the importance of ongoing testing and transparency. “We want to see regular testing – not just pre-deployment – audits, risk assessments, regulatory readiness.” In this environment, governance is not simply a regulatory box-ticking exercise but central to how insurers distinguish manageable AI risk from outsized tail exposure.

Regulators are still finding their footing

Another parallel with silent cyber lies in regulatory uncertainty. While regulators have begun to articulate expectations around insurers’ internal use of AI, there is far less clarity about what they expect from insurers underwriting AI risk itself.

Davey expressed concerns that, just as with silent cyber, there is a risk that regulators may “wake up a bit too late.” Without early engagement, the industry risks missing a critical window to gather data, shape standards and avoid retrospective interventions driven by loss events.

Looking beyond today’s IP claims

While IP litigation dominates the current narrative, Davey believes the most disruptive losses may still lie ahead, citing bodily injury and wrongful death claims, autonomous systems, and AI’s interaction with children and other vulnerable users as key areas of concern. 

In other words, AI is not just a technology risk. It is a systemic risk that challenges how insurance has historically been structured, priced, and regulated, and underwriters across all affected business lines need to wise up.
