
What insurers must understand before underwriting the next AI-driven catastrophe
With the pace of digital transformation accelerating at unprecedented rates, cyber risk is today unavoidable. This makes a reliable, resilient and credible cyber insurance market a necessity particularly as threats are becoming more significant, complex and sophisticated. Outwitting them has become business critical and a question of whether today’s cyber products, models and strategies are fit for purpose.
More than 250 senior cyber insurance leaders — including carriers, brokers, reinsurers and MGAs — gathered in London on 3–4 February for Intelligent Insurer’s annual Cyber Risk & Insurance Innovation Europe event, sharing practical insights on underwriting, claims, systemic risk and the future of cyber insurance.
Pete Nicoletti, Global CISO – Americas at Check Point and one of more than 65 senior speakers at the event, is a seasoned cybersecurity leader who combines deep technical expertise with strategic executive insight. He is adept at tackling complex security challenges, from advising major corporations to securing global enterprises.
In his keynote on day one of the event (February 3, 2026), Pete Nicoletti unpacked the evolving landscape of AI risk and what insurers must truly understand to navigate it. He explored the real-world consequences of recent AI failures, highlighting where systems, governance, and controls have fallen short — and what this means for underwriting, risk assessment, and resilience.
Drawing on practical examples, Nicoletti identified the critical safeguards, controls, and countermeasures needed to prevent loss, systemic disruption, and potential AI-driven catastrophes. His keynote set out the core themes shaping the future of AI risk in insurance and offered a clear framework for how the market must respond.
Q: If a single AI model failure were exploited simultaneously across thousands of insureds, would you classify that as a cyber event - or an uninsurable systemic catastrophe?
In an era where AI models like large language models are deployed at scale across industries, a single vulnerability - such as a zero-day exploit in a widely used foundation model could cascade into simultaneous failures affecting thousands of organisations, amplifying losses through correlated risks that traditional cyber insurance models, designed for isolated incidents, are ill-equipped to handle. This scenario blurs the line between a standard cyber event, which might involve ransomware or data breaches at a single company, and a systemic catastrophe akin to a financial market crash or natural disaster, where the interdependence of AI ecosystems creates uninsurable aggregation risks.
From Check Point's vantage as a leader in cybersecurity, we urge insurers to reclassify such events by incorporating AI-specific risk understanding (using AI Risk Frameworks) correlations into actuarial assessments, emphasizing the need for diversified model architectures and real-time threat sharing to prevent widespread exploitation, thereby enabling more sustainable coverage for insureds while mitigating the potential for industry-wide insolvency.
Q: Which AI risks today are being silently excluded from underwriting because they’re poorly understood - prompt injection, model poisoning, or autonomous decision errors?
Many cyber insurance policies inadvertently exclude emerging AI risks like prompt injection, where attackers craft inputs to manipulate generative AI outputs and bypass safeguards, model poisoning that corrupts training data to embed long-term biases or backdoors, and autonomous decision errors in AI-driven systems that lead to unintended actions such as erroneous financial trades or safety overrides. These risks are often overlooked in underwriting due to a lack of technical depth in policy language, which focuses on conventional threats like malware rather than AI-specific vectors, resulting in coverage gaps that leave insureds exposed to novel liabilities.
As Check Point's global CISO, I highlight the imperative for insurers to integrate these risks into well known risk frameworks, questionnaires, evidence of countermeasures and audits, advocating for advanced defences like input sanitisation, data integrity checks, and explainable AI to bridge the understanding gap, ensuring policies evolve to reflect the technical realities of AI deployment and reduce silent exclusions that could lead to denied claims.
Q: What controls would have prevented the most expensive AI-related failures we’ve already seen - and how many of those controls are insurers actually validating today?
Historical AI failures, such as the 2018 Uber autonomous vehicle incident or the 2023 Air Canada chatbot mishap leading to erroneous commitments, could have been averted through controls like rigorous adversarial testing to simulate attacks, AI Risk management tools available from Check Point, continuous monitoring for model drift, and human-in-the-loop oversight for high-stakes decisions, yet many insurers validate only a fraction of these - often limited to basic data security without delving into AI governance. This shortfall stems from underwriting processes that don’t consider AI project risks, prioritise checkboxes over in-depth technical validations, missing opportunities to enforce preventive measures that could slash loss ratios.
At Check Point, we champion a shift toward AI Risk Understanding, specific control-based underwriting, where insurers require evidence of implemented safeguards like secure AI pipelines and anomaly detection systems, fostering a proactive ecosystem that not only prevents costly failures but also incentivises insureds to adopt best practices for enhanced insurability.
Q: Are insurers underwriting AI risk based on documented governance - or on marketing claims about ‘responsible AI’ that are rarely tested in production?
Too often, cyber insurers base AI risk assessments on superficial marketing/client claims of "responsible AI" such as ethical guidelines or bias mitigation promises, rather than verifiable, documented governance that includes production-tested controls like audit trails, vulnerability scanning of AI models, and compliance with standards like NIST AI RMF/ISO/MITRE/MIT/EU AI Act. This reliance on assurance theatre exposes policies to unquantified risks, as untested claims that fail to address real-world issues like runtime exploits or supply chain weaknesses.
Drawing from Check Point's expertise in securing AI environments, I recommend that underwriters demand empirical evidence through third-party audits and penetration testing, transitioning from policy statements to enforceable technical validations that ensure robust governance, ultimately leading to more accurate risk understanding and cyber insurance pricing and reduced claim surprises in an AI-driven landscape.
Q: If an AI system autonomously causes financial harm, safety incidents, or regulatory violations - who ultimately owns the loss: the insured, the model provider, or the insurer?
When an autonomous AI system triggers harm - such as algorithmic trading errors causing market losses, biased hiring tools leading to discrimination lawsuits, or self-driving tech resulting in accidents - the liability chain becomes murky, often defaulting to the insured organisation for deployment decisions, while model providers may evade responsibility through disclaimers, leaving insurers to absorb uncovered portions amid ambiguous policy terms. This shared responsibility model exacerbates supply-chain risks, where upstream vulnerabilities in cloud-based AI services propagate downstream without clear accountability. As Check Point's global CISO, I advocate for clarifying ownership through contractual indemnity clauses, joint risk assessments, and insurance endorsements that cover AI-specific liabilities, encouraging all parties to implement layered security like endpoint protection and forensic logging to minimise incidents and streamline claims resolution.
Q: What would an AI-driven loss look like that breaks today’s cyber insurance model - not because of scale, but because of speed?
An AI-driven loss propelled by speed could manifest as a flash exploit in high-frequency trading algorithms, where adversarial inputs trigger erroneous trades in milliseconds, cascading into multimillion-dollar losses before human intervention, or a generative AI system rapidly disseminating disinformation that erodes market trust in seconds, outpacing traditional human time frame incident response timelines.
Such events shatter current cyber insurance models, which assume days-long detection windows for claims processing and recovery, rendering standard forensics and mitigation ineffective against machine-speed propagation. From Check Point's perspective, addressing this requires insurers to adapt with AI-augmented policies that include rapid-response endorsements and automated claim triggers, while companies deploy real-time AI-driven defences like intrusion prevention systems, behavioural analytics and zero-trust AI frameworks to compress response times to machine speed, ensuring the insurance ecosystem keeps pace with the velocity of AI threats.
For more information, reach out to Pete Nicoletti at petern@checkpoint.com Check Point Software: Leader in Cyber Security Solutions
Did you get value from this story? Sign up to our free daily newsletters and get stories like this sent straight to your inbox.
Editor's picks
Editor's picks
More articles
Copyright © intelligentinsurer.com 2024 | Headless Content Management with Blaze
.jpg/r%5Bwidth%5D=320&r%5Bheight%5D=180/72242230-f543-11f0-be7c-0d5bfb90110c-ZurichBeazley_Deal_Shutterstock.webp)