
What insurers must understand about AI-driven catastrophes
The Cyber Risk & Insurance Innovation USA conference, hosted by Intelligent Insurer in Chicago on April 21-22, 2026, brings together stakeholders in cyber insurance for discussions of today's key industry topics, including the threat landscape, war exclusions, systemic risk and market conditions. Sessions will also tackle how to navigate the SME sector, the legal maze reshaping cyber insurance, the game-changing role of AI and the power of data in risk analysis. The event will deliver more than 12 hours of learning to over 250 senior insurance professionals.
Tony Sabaj (pictured) is Head of Channel Security Engineering for the Americas at Check Point and a member of the Office of the CTO, bringing over 30 years of experience in cybersecurity. Since joining Check Point in 2002, he has held various sales and technical roles, including founding the North American channel team. Previously, he was a Senior Product Manager at Telenisus, an MSSP/VAR, and later joined Forsythe as a Security Consultant after it acquired Telenisus’s VAR business. He began his career at Arthur Andersen, where he built the firm’s global IP network and helped establish its security consulting practice.
On Tuesday, 21st April, Sabaj will deliver his session on the risks, countermeasures, and consequences of artificial intelligence, offering critical insights for insurers preparing to underwrite the next AI-driven catastrophe. He will emphasise that AI usage is now unavoidable in modern workplaces, as it is embedded in many everyday applications. He will then explore the unique security risks posed by AI, highlighting the unprecedented speed at which the technology evolves, and address the ongoing threat of AI 'hallucinations', where systems make autonomous decisions based on their training data. Below, Sabaj outlines the key themes that form his keynote.
What is the single biggest misconception businesses have about AI risk right now?
While the industry often worries about data leakage, I believe the biggest risk is misuse - specifically, the harmful content AI can generate when used for the wrong purposes. A terrifying example is Grok, which was reportedly used to create child sexual abuse imagery.
From an insurance perspective, this raises difficult questions about legal responsibility. If a business instructs or enables the AI, is it liable for the outcome? Another illustrative case is the abuse of airline AI chatbots. In one instance, a passenger whose flight was cancelled persuaded the bot to issue a free ticket for another flight. When the airline refused to honour it, the ruling found that the chatbot was an agent of the organisation - meaning the airline had to stand by the AI's decision.
These examples show the real dangers of AI misuse. If this kind of abuse scales, the financial repercussions for businesses could be devastating.
What recent real-world AI failure should serve as a wake-up call for the insurance industry, and why?
In addition to those examples, there was a case where an AI agent used for coding purposes deleted a user's entire hard drive. This is what's known as an 'AI hallucination' - meaning the AI effectively goes off the rails and starts to make its own decisions. If that type of incident were to occur in a cloud computing environment at a larger scale, the implications would be enormous, with a potential worldwide ripple effect. A lot of people rely on AI, but it's not always correct and doesn't always reach the right conclusions.
When evaluating a customer's AI usage, what's the first red flag you look for? Have you been pleasantly surprised by a customer's secure AI readiness?
From both a cybersecurity and cyber insurance perspective, employees are using AI every day, whether it's ChatGPT, Microsoft Copilot, or AI agents built into platforms like Salesforce. This raises an important governance debate. What safeguarding measures and controls are employers putting in place to ensure their people aren't misusing these everyday AI tools? How are they making sure that intellectual property or personally identifiable information isn't being leaked into a public system?
The key is finding a way to prevent AI from doing things you don't want it to do - like sharing information that shouldn't be shared. Take the medical world as an example - if an AI is drawing from a massive database to curate responses, how do you ensure it doesn't share an individual's private medical information with someone who isn't authorised to see it?
At its core, AI is just an application, and it needs to be built on zero trust models. What strategies are in place to contain it when it goes off the rails? And then there's the constant threat of AI hackers - a challenge that's always evolving.
We're in a soft cyber insurance market, and policyholders and applicants have choices when it comes to insurers. How might insurers guide clients toward better cybersecurity practices without compromising the deal?
The first step is to understand what AI people are actually using. Most organisations don't have a complete picture of the AI tools in use across their business. When asked, they might list around 10% of what's actually being utilised, but the real number is almost always far greater. Without fully understanding all the AI that employees are using, it's nearly impossible to put meaningful controls and effective policies in place.
Many employers will say, for example, "We only allow our employees to use Copilot," but they often don't realise that other AI-enabled applications are also being used. AI is built into so many platforms - there's AI in your phone, even if you don't actively use it. Businesses often buy AI off the shelf without questioning where the data goes, and just as importantly, what data their AI has access to and what regulations apply to that data.
When someone asks, "What data does your AI have access to, and what are the regulations around that data?" they're asking two critical questions about safety and compliance.
First, the scope - what specific information is the AI system reading or using? Is it looking at public records, or is it accessing private customer medical records?
Second, the guardrails - what laws and rules are in place to ensure that sensitive data remains private and secure, even though an AI is reading it? Regulations like HIPAA, which protects patient health information, and PCI DSS, which governs credit card data security, are key examples.
Ultimately, AI systems should be built with the same understanding we apply to humans - both are unpredictable. That's why, in my view, autonomous cars will never fully take off. The concept only works if every vehicle on the road is self-driving and able to communicate with the others; what makes autonomous cars dangerous is the unpredictability of human drivers.
If an AI-driven cyber event happens tomorrow, how would the digital forensics investigation unfold? What preventative measures should be in place to avoid or limit the risk of such events?
When a user types a single prompt into a tool like ChatGPT, it appears as one action but on the back end, that request may trigger thousands of individual token calls to generate a response. If a security incident occurs, whether it's malicious activity or accidental data leakage, investigators cannot simply trace one query back to a database. Instead, they must sift through potentially thousands of fragmented interactions to determine the root cause, making the digital footprint exponentially harder to reconstruct.
Whether it's an actual attack or a breach of data, forensics become much more difficult when AI is involved, particularly if the AI is a cloud service.
To limit these risks, organisations should implement comprehensive logging that captures detailed token-level activity, maintain strict access controls to limit who can interact with AI systems, and deploy continuous monitoring tools designed to detect anomalous patterns in AI queries and outputs. While the threat is complex, these measures can help ensure that preparation makes the difference between containment and catastrophe.
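To make those measures concrete, here is a minimal sketch of what such controls could look like in practice. It assumes a hypothetical internal gateway through which employees reach a generative AI model; the role list, token threshold, PII patterns and the call_model stub are illustrative placeholders, not a description of any specific product or of Check Point's tooling.

```python
# Illustrative sketch: an internal gateway that wraps calls to a generative AI
# model with access control, per-request logging, and simple anomaly checks.
# The model call is stubbed out; names, patterns and thresholds are assumptions.

import json
import logging
import re
import uuid
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai-gateway")

ALLOWED_ROLES = {"underwriter", "claims-analyst"}        # access control: who may query the AI
PII_PATTERNS = [re.compile(r"\b\d{13,16}\b"),            # crude payment-card number check
                re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]    # crude US SSN check
MAX_PROMPT_TOKENS = 2000                                  # anomaly threshold (assumed)


def call_model(prompt: str) -> str:
    """Stand-in for the real model API; returns a canned answer."""
    return f"[model response to {len(prompt.split())} words of input]"


def query_ai(user: str, role: str, prompt: str) -> str:
    request_id = str(uuid.uuid4())                        # correlation ID for later forensics
    tokens = len(prompt.split())                          # rough token count

    if role not in ALLOWED_ROLES:
        log.warning(json.dumps({"id": request_id, "user": user, "event": "denied"}))
        raise PermissionError(f"role '{role}' may not query the AI gateway")

    flags = []
    if tokens > MAX_PROMPT_TOKENS:
        flags.append("oversized_prompt")
    if any(p.search(prompt) for p in PII_PATTERNS):
        flags.append("possible_pii_in_prompt")

    response = call_model(prompt)

    # Log every interaction with enough detail to reconstruct it afterwards.
    log.info(json.dumps({
        "id": request_id,
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "role": role,
        "prompt_tokens": tokens,
        "response_chars": len(response),
        "flags": flags,
    }))
    return response


if __name__ == "__main__":
    print(query_ai("asmith", "underwriter", "Summarise the cyber exposure of a mid-sized retailer."))
```

Nothing about this sketch is prescriptive; the point is that each AI interaction carries a correlation ID, a token count and any anomaly flags, so that if an incident occurs investigators are not left reconstructing thousands of fragmented calls from nothing.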
There is also the question of force majeure - extraordinary events that are unforeseeable and beyond the control of the parties involved. The challenge from an underwriting perspective is how to qualify such events when AI is at the centre of the incident.
For more information, reach out to Tony Sabaj at Check Point Software.
