13 February 2026 | Insurance | Faye Waters

AI’s great debate: Standalone product or bolt-on

Panellists went toe-to-toe at yesterday’s (Thursday 12 February) Agentic + Generative AI For Insurance Europe 2026 conference, hosted by Intelligent Insurer, over how coverage must evolve to bridge the gaping protection gap for fast-evolving AI products. While the consensus was that both liability and regulation remain open questions, the panel was divided on how best to tackle the risk.

The ‘Product liability and AI: Crafting Fit-For-Purpose AI Risk Products’ panel, part of the ‘Frontier risks and product innovation - AI & Robotics’ workshop, saw Claire Davey, SVP - head of product innovation and emerging risk at Relm Insurance, Chris Moore, president of Apollo ibott Commercial, Ed Ventham, co-founder & head of broking at Assured, and George Holevas, SVP, cyber, media & technology practice, specialty UK at Marsh, debate the still-unanswered questions about how best to tackle the relentless evolution of AI risk as it spreads not just across business use but also into automation.

Chris Moore, with a background in autonomous-vehicle insurance, explained that the rigidity of insurance verticals makes identifying AI protection gaps extremely difficult, as underwriters work within their own silos and do not consider AI risk across all lines of business.

Moore said: “We tend to stick within our insurance industry verticals, and we do that at the detriment of our clients, in my opinion. It becomes ‘here are all the marine scenarios, speak to the marine team. Here are all the aviation scenarios, speak to the aviation team. Here are all the liability scenarios, speak to the liability team.’ And that's where you start to have these coverage gaps emerge. Everyone has this huge debate as insurers saying: ‘Is it products liability? Is it cyber liability? Is it auto liability? Is it public or general liability?’ And the truth is, it doesn't need to be any of those things.”

For this reason, he argued that a standalone, wrap-around AI line of business is necessary for the continued development of AI products, ensuring that liability is covered across every subsection of the risk.

Claire Davey, however, stressed the need to make cover accessible to the clients who need it, arguing that Moore’s approach could create an obstacle for clients whose core exposure isn’t specifically AI. A purpose-built AI product, she suggested, may price out start-ups and SMEs for whom AI is not the main focus.

Davey said: “I think we need to start seeing existing product lines being tailored to address these exposures within existing industry groups. Clients have a core exposure that they still need to purchase for, and they will continue to do that, and AI is just literally one part of it, if it's not their core business purpose.” The stance mirrors Relm’s own approach: the firm released three affirmative AI products in January 2025.

Countering the point, Moore argued that bolting AI definitions onto existing products fails to cover the full lifecycle of AI products. He cautioned that the big players in the AI game are scaling fast, with sufficient premium on their balance sheets to retain the risk in-house if they cannot find the product they want, treating the potential cost of risk as a business expense.

Further, Moore warned that if clients begin with a revised existing cover rather than a standalone product, their AI products risk outgrowing the coverage. Having exhausted their options or feeling failed by their provider, those clients may then turn inward, building an in-house team and solution to manage the risk and bypassing the insurer altogether. “If they can’t find the product they want to buy, they will make it themselves,” he stressed.

Playing devil’s advocate, Ventham argued that some AI use could fall under existing cyber cover if AI sits within the definition of a computer system: “It is part of your computer system, if you're using it as a client.” Adapting or adding to policy wording could begin to cover risk arising from computer output: “We could make it explicitly clear by specifying terminology of things like hallucinations in policy wording.” This approach, of course, retains a human-in-the-loop element, which raises liability questions of its own.

Davey countered with concerns: “A computer system definition is broad enough to capture perhaps what an AI system is, but I think we need to be thinking about what the insurable risks are, what are the loss scenarios that clients are concerned about? It’s issues like IP infringement and discrimination. If you look at a tech cyber policy, those things are often excluded.” Such policies, she argued, would need to evolve far beyond their current standard to encapsulate the risk, reinforcing her stance that tailoring existing structures to include AI-specific coverage across existing lines would be more effective at protecting the client.

Behind these conversations, of course, sits the question of liability. The panellists agreed that, with underdeveloped regulation failing to keep pace with rapidly evolving technology, there is extensive finger-pointing. Regulation is coming, the EU AI Act in particular, but there is no generally accepted standard. As Moore pointed out, “most regulatory bodies haven’t even defined what artificial intelligence is. There’s no universal definition,” which leaves a question mark over responsibility. Is it the product developer, the tech provider, the business, or the end user? Moore believes that, given the lack of regulatory guardrails, the insurance industry needs to step in and set some ethical and moral boundaries around liability by shaping the coverage available for the risk.
