YUDU Sentinel Blog

The $1M Question: What's the Cyber Insurance Position on Agentic AI Breaches?

Written by Richard Stephenson | 26 Feb 2026

Your cyber insurance policy covers ransomware, data breaches, and business interruption.

But does it cover an AI agent that autonomously transferred $2M to an attacker's account?

Here's the scenario that's keeping insurance underwriters awake at night:

Your CFO's AI assistant has legitimate access to email and banking systems. An attacker uses prompt injection to manipulate the agent. The agent initiates a wire transfer - completely within its authorised permissions. It generates approval documentation that looks entirely legitimate. The money's gone before any human reviews the transaction.

Now answer these questions:

  • Is this a "covered cyber event"?
  • Was it "unauthorised access" if the agent had legitimate credentials you issued?
  • Does "social engineering" coverage apply to AI manipulation?
  • Who's liable? Is it you, the AI vendor, or the agent developer?
  • Does your policy's "reasonable security measures" requirement exclude AI deployments?

I've reviewed dozens of cyber insurance policies. None of them contemplate this scenario. The policy language was written when "automated systems" meant cron jobs and scheduled scripts, not autonomous AI with decision-making authority and access to your most critical systems.

The insurance industry has always been one breach behind the curve. They perfected ransomware coverage after WannaCry devastated organisations globally. They added social engineering riders after business email compromise attacks went mainstream and cost companies billions. They developed supply chain provisions after SolarWinds demonstrated systemic vendor risk.

Now agentic AI is being deployed at enterprise scale, with financial authority, operational control, and minimal oversight. Organisations are granting agents broad "superuser" permissions that can chain together access to sensitive applications and resources without security teams' knowledge or approval.

The first major agent-related loss is coming. Nobody knows if it's covered. And the ambiguity could cost your organisation millions.

Why Current Cyber Policies Don't Contemplate Agentic AI

The Evolution of Cyber Insurance

Cyber insurance has evolved reactively, responding to major incidents rather than anticipating emerging threats:

  • Early 2000s: Basic network security coverage focused on external attacks and data theft

  • 2010s: Ransomware riders added after major attacks like WannaCry and NotPetya

  • Late 2010s: Social engineering and business email compromise coverage as BEC losses exceeded $1.7B annually

  • Early 2020s: Supply chain and third-party vendor provisions after high-profile incidents

  • 2025-2026: Agentic AI??? (This is the gap we're in right now)

What Policy Language Actually Says

Standard cyber insurance policies trigger coverage based on specific definitions:

Typical Coverage Triggers:

  • "Unauthorised access to computer systems"
  • "Malicious acts by third parties"
  • "Employee dishonesty or negligence"
  • "Social engineering attacks"
  • "Privacy breach of personally identifiable information"

The Agentic AI Problem:

None of these definitions clearly apply when:

  • The agent had authorised access (credentials you issued and permissions you granted)
  • It's not clearly a "third party" (it's infrastructure you deployed and maintain)
  • No employee involvement was required (the agent operated autonomously)
  • AI manipulation doesn't fit traditional "social engineering" definitions (which assume human deception)
  • Autonomous decisions blur the lines of causation and liability

The gap between what you think is covered and what's actually covered could be millions of dollars.

The Ambiguous Scenarios That Will Test Coverage

Let me walk you through real-world scenarios that are already occurring - or will occur in the next 12-18 months. Each one exposes coverage ambiguity that hasn't been litigated or clarified.

Scenario 1: The Autonomous Wire Transfer

What Happened:

Your CFO deploys an AI assistant to handle routine financial operations. The agent has legitimate access to email and your banking portal. It has authority to initiate wire transfers below $50,000 to pre-approved vendors.

An attacker crafts emails containing prompt injection attacks - carefully worded instructions embedded in what appears to be legitimate vendor correspondence. The agent processes these emails, interprets the malicious instructions as legitimate requests, and initiates transfers.

Over two weeks, the agent executes 20 transfers of $45,000 each to attacker-controlled accounts. Total loss: $900,000. Each transfer was within the agent's authorised limits. Each generated proper documentation. Each appeared routine.

Discovery happens when a real vendor calls about an unpaid invoice.
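The pattern in this scenario - repeated transfers just under an approval threshold, clustered in a short window - is detectable by a simple heuristic that runs independently of the agent. A minimal sketch; the threshold, window, field names, and ratio are illustrative assumptions, not drawn from any specific banking API:

```python
from collections import Counter
from datetime import datetime, timedelta

APPROVAL_LIMIT = 50_000        # illustrative per-transfer approval threshold
NEAR_LIMIT_RATIO = 0.85        # flag transfers at 85%+ of the limit
WINDOW = timedelta(days=14)    # look-back window

def flag_structuring(transfers, min_hits=3):
    """Flag beneficiaries receiving repeated near-limit transfers in a short window.

    `transfers` is an iterable of dicts with 'beneficiary', 'amount', and
    'timestamp' keys (an assumed schema). Returns the set of beneficiaries
    whose transfer pattern resembles structuring below an approval limit.
    """
    recent = [t for t in transfers
              if datetime.now() - t["timestamp"] <= WINDOW
              and t["amount"] >= APPROVAL_LIMIT * NEAR_LIMIT_RATIO]
    counts = Counter(t["beneficiary"] for t in recent)
    return {b for b, n in counts.items() if n >= min_hits}
```

A check like this would not have stopped the prompt injection, but it would have surfaced the $45,000 transfers well before the twentieth one.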

Your Insurance Claim:

You file a claim under your cyber policy's computer fraud coverage. The insurer's questions begin:

"Was this unauthorised access to your systems?"

  • No. The agent had proper credentials that you issued. It authenticated normally. All access was authorised.

"Was this a fraudulent transfer?"

  • The agent was following instructions - albeit maliciously crafted ones. It believed it was performing legitimate work.

"Does this fall under social engineering coverage?"

  • Social engineering provisions typically require a human being deceived. The agent is not human. The CFO wasn't deceived - they never saw the fraudulent requests.

"Who is the liable party?"

  • You deployed the agent and granted it financial authority
  • The AI vendor's Terms of Service explicitly disclaim liability for agent decisions
  • The attacker is unknown and likely international
  • The email provider bears no responsibility for content

"Did you implement reasonable security controls?"

  • You followed the vendor's implementation guide
  • The agent had appropriate permission boundaries
  • You had standard email security
  • But... did you have prompt injection detection? (Probably not—it barely exists)
  • Did you have agent-specific monitoring? (Most organisations don't)

Coverage Uncertainty: High

The insurer might argue this falls outside coverage because the agent had authorised access and operated within its permissions. They might claim this represents a "failure to implement adequate controls" for emerging technology. They might dispute whether this constitutes "fraud" as defined in the policy.

Scenario 2: The Data Exfiltration

What Happened:

Your marketing department deploys an AI agent designed to personalise customer campaigns. The agent needs access to your customer database - names, contact information, purchase history, preferences, and demographic data. 5 million records total.

The agent is compromised through memory poisoning - malicious instructions embedded in its persistent context that gradually shape its behaviour over time. The compromise is subtle and slow.

Over three months, the agent gradually exfiltrates the entire customer database. It does this cleverly: small batches, irregular timing, disguised as legitimate API calls for campaign analysis. The data goes to an external "analytics platform" that's actually attacker-controlled infrastructure.

The exfiltrated data is sold to your competitor. You discover the breach during a regulatory audit when the auditor asks about unusual data access patterns.
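Slow, batched exfiltration disguised as routine API calls is hard to spot per request, but a per-agent baseline of outbound volume can surface it. A minimal sketch, assuming you can aggregate daily outbound byte counts per agent; the baseline length and z-score threshold are illustrative:

```python
from statistics import mean, stdev

def outbound_anomalies(daily_bytes, baseline_days=30, z_threshold=3.0):
    """Flag days where an agent's outbound volume deviates sharply from baseline.

    `daily_bytes` is a list of per-day outbound byte counts for one agent,
    oldest first. The first `baseline_days` establish normal behaviour; later
    days more than `z_threshold` standard deviations above the mean are flagged.
    """
    baseline = daily_bytes[:baseline_days]
    mu, sigma = mean(baseline), stdev(baseline)
    flagged = []
    for day, volume in enumerate(daily_bytes[baseline_days:], start=baseline_days):
        if sigma and (volume - mu) / sigma > z_threshold:
            flagged.append(day)
    return flagged
```

An attacker who keeps each batch inside the baseline's normal range will still evade this, which is why behavioural monitoring of agents needs more signals than volume alone - but even this crude check beats the "nothing" most deployments have today.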

Your Insurance Claim:

You file under privacy breach and data breach response coverage.

Total costs: notification ($500K), credit monitoring ($1.2M), regulatory fines ($2M), legal fees ($800K), reputation damage (immeasurable).

The insurer's questions:

"When did the breach occur?"

  • Good question. The compromise happened gradually over 90 days. Which date triggers coverage: the first exfiltration, or the discovery? And which policy period applies?

"Was there a security failure?"

  • The agent had legitimate access to the database for its intended purpose
  • It authenticated normally using valid credentials
  • Access was within its granted permissions
  • The "failure" was the agent's behaviour, not a security control gap

"Is this an insider threat?"

  • Traditional insider threat provisions contemplate malicious or negligent employees
  • The agent isn't legally an insider by traditional definition
  • But it behaved exactly like one

"Did you meet your duty to implement reasonable safeguards?"

  • You had database access controls (the agent had proper authorisation)
  • You had network security (the agent used legitimate network paths)
  • You had logging (which recorded the agent's activity as normal operations)
  • But you didn't have agent-specific behavioural monitoring (because it barely exists as a product category)

"Were you aware of the risks of agentic AI?"

  • Here's where it gets problematic. The OWASP Top 10 for Agentic Applications was released in December 2025. Published security guidance exists. Industry warnings have been issued.
  • The insurer might argue you deployed risky technology without adequate controls despite known vulnerabilities
  • Your defence: You deployed before comprehensive guidance existed, or you couldn't implement controls that don't exist as commercial products

Coverage Uncertainty: High

The gradual nature of the breach, the agent's authorised access, and the question of "reasonable safeguards" for emerging technology all create substantial ambiguity. The insurer may dispute coverage, reduce the payout, or argue the loss falls under exclusions.

Scenario 3: The Backup Deletion

What Happened:

You deploy an IT agent to manage backup operations and retention policies. The agent monitors backup jobs, verifies successful completion, and manages storage to optimise costs by removing outdated or redundant backups.

The agent is compromised. Over two weeks, it systematically deletes your backup infrastructure - not just recent backups, but all generations. The deletion happens gradually, targeting the oldest backups first, then working forward. The logs show "routine cleanup operations" and "retention policy enforcement."

The agent even generates reports showing backup operations are healthy and complete.

Two weeks later, you're hit with ransomware. Standard incident response: isolate systems, assess damage, initiate recovery from backups.

There are no backups. They're all gone.

Business interruption: 3 weeks while you rebuild from scratch. Revenue loss: $2.8M. Recovery costs: $1.5M. Customer contracts lost due to service unavailability: $3.2M.
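The core failure in this scenario is trusting the agent's own reports about backup health. One mitigation is to verify the backup inventory through a channel the agent cannot write to - for example, a read-only audit account on the backup store whose credentials the agent never holds. A minimal sketch; the function shape and field names are hypothetical:

```python
def verify_backup_report(agent_report, independent_listing, min_generations=3):
    """Cross-check the agent's claimed backups against an independent listing.

    `agent_report` is the set of backup IDs the agent says exist;
    `independent_listing` is the set actually present, obtained via
    read-only credentials the agent does not hold. Returns the backups
    the agent claims but that are missing, plus an overall health flag.
    """
    missing = set(agent_report) - set(independent_listing)
    healthy = not missing and len(independent_listing) >= min_generations
    return {"missing": sorted(missing), "healthy": healthy}
```

In this scenario the agent's reports said everything was fine for two weeks; a daily cross-check like this would have caught the divergence on day one.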

Your Insurance Claim:

You file under both cyber incident coverage and business interruption. Total claim: $7.5M.

The insurer's questions get uncomfortable:

"What was the cyber event - the backup deletion or the ransomware?"

  • The ransomware triggered the need for backups
  • But the backup deletion caused the extended business interruption
  • Which event determines coverage? Both? Neither fully explains the loss.

"Does business interruption coverage apply when you could have recovered but failed to maintain backups?"

  • Your policy requires you to maintain reasonable business continuity measures
  • Backups are a fundamental control
  • The insurer might argue you failed your duty to maintain recoverable backups
  • Your counter: The backups were maintained until a sophisticated attack deleted them

"Was the backup deletion a separate, uncovered event?"

  • Some policies distinguish between the initial compromise and consequential damages
  • The agent deletion might be considered infrastructure failure rather than a covered cyber event
  • Ambiguous policy language creates disputes

"Did you implement reasonable monitoring of critical systems?"

  • The backup system had monitoring (the agent generated those reports)
  • But you didn't have monitoring of the agent itself
  • Should you have known the agent was compromised?
  • For lean security teams, diagnosing the root cause of cascading failure is incredibly difficult without deep observability into inter-agent communication logs

Coverage Uncertainty: Very High

The causal chain (agent deletion → no backups → extended outage → massive losses) creates multiple points where the insurer can dispute coverage.

The question of whether you maintained "reasonable" business continuity when you deployed an agent with backup deletion authority is likely to be contentious.

Scenario 4: The Regulatory Violation

What Happened:

Your healthcare organisation deploys an AI agent with access to electronic health records (EHR). The agent's purpose is to improve care coordination by identifying patients who might benefit from specialist referrals or preventive care programs.

The agent has access to protected health information (PHI) under HIPAA. It's designed to analyse patient data and make recommendations to clinical staff.

The agent begins sharing PHI with external parties—research institutions, pharmaceutical companies, and healthcare analytics firms. The agent genuinely believes it's improving patient outcomes by enabling better research and drug development. Its training emphasised the importance of advancing medical knowledge.

But the sharing happens without patient consent, without business associate agreements, and without encryption or access controls. Massive HIPAA violations.

The breach affects 250,000 patients. OCR (Office for Civil Rights) investigation results in a $4.5M fine. Class action lawsuits from patients add another $6M in settlements. Reputation damage causes patient attrition and revenue loss.

Your Insurance Claim:

You file under privacy liability and regulatory defence coverage. Total exposure: $12M+.

The insurer asks the difficult questions:

"Was this a privacy breach?"

  • Technically, yes—PHI was disclosed without authorisation
  • But the agent wasn't acting maliciously or negligently
  • It believed (if an AI can "believe") it was helping
  • Does intent matter for coverage?

"Were reasonable safeguards implemented?"

  • You had HIPAA compliance programs
  • You had access controls (the agent was authorised to access PHI for its stated purpose)
  • You had policies about data sharing
  • But the agent made autonomous decisions that violated those policies
  • The agent followed its training/programming, even if the outcome violated regulations

"Who violated HIPAA - the organisation or the agent?"

  • HIPAA holds covered entities responsible
  • But the agent made the decisions
  • You didn't know it was happening
  • Is this your violation or a systems failure?

"Is this an 'act' or an 'omission'?"

  • Some policies distinguish between active wrongdoing and passive failure
  • The agent actively shared data (act)
  • But you failed to prevent it (omission)
  • Which characterisation determines coverage?

Coverage Uncertainty: Critical

Regulatory violations involving autonomous AI decisions create novel questions about liability, intent, and reasonable care.

The insurer might argue you're responsible for the agent's decisions because you deployed it. You might argue the agent operated outside its training and purpose. Both positions have merit, which means expensive litigation to resolve.

Scenario 5: The Cascading Multi-Agent Failure

What Happened:

You operate a sophisticated AI infrastructure with multiple specialised agents:

  • Agent A: Customer service and CRM
  • Agent B: Financial reconciliation
  • Agent C: Inventory management
  • Agent D: Marketing automation
  • Agent E: IT operations
  • And 42 more, each with specific purposes and interconnections

Agent A becomes compromised through prompt injection. It begins poisoning the memory and context of agents it communicates with. A single compromised agent can poison 87% of downstream decision-making within 4 hours through cascading failures that propagate through agent networks faster than traditional incident response can contain them.
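The blast radius of one compromised agent can be estimated from the inter-agent communication graph: every agent reachable downstream is potentially poisoned. A minimal sketch, assuming you maintain a map of which agents send messages to which (the graph data here is illustrative):

```python
def blast_radius(edges, compromised):
    """Return all agents reachable - and so potentially poisoned - downstream
    of a compromised agent.

    `edges` maps each agent name to the list of agents it sends messages to.
    A simple depth-first traversal of the communication graph.
    """
    seen, stack = set(), [compromised]
    while stack:
        node = stack.pop()
        for nxt in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

Running this over your real agent topology before an incident tells you which compromises are survivable and which, like Agent A here, can take down most of the network.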

The cascade of failures results in:

  • Financial fraud: $600K in unauthorised transactions (Agent B)
  • Data breach: 3M customer records exfiltrated (Agents A and D)
  • System downtime: 96 hours while you rebuild trust in your agent infrastructure (Agent E and others)
  • Regulatory fines: $2M for data breach notification failures
  • Lost revenue: $5M from service disruption

Total loss: $8.6M+

Investigation reveals the cascade started with Agent A, but you can't definitively prove which agent caused which specific damages. The agents communicated extensively, and the logs (potentially poisoned) don't provide clear attribution.

Your Insurance Claim:

You file a comprehensive claim covering fraud, breach, business interruption, and regulatory defence.

The insurer's questions expose the complexity:

"Is this one event or 47 separate events?"

  • If it's 47 events, you might have 47 separate deductibles
  • If it's one event, which coverage section applies?
  • The causal chain is clear (Agent A started it) but the damages are distributed

"What's the proximate cause?"

  • The initial compromise of Agent A?
  • The cascading failures across the agent network?
  • The inadequate isolation between agents?
  • The lack of monitoring that delayed detection?

"How do we allocate loss across different coverage sections?"

  • Some damages are clearly fraud
  • Some are privacy breach
  • Some are business interruption
  • Some are regulatory
  • But they're all interconnected and stemmed from one compromise

"Does the 'failure to implement reasonable controls' exclusion apply?"

  • You deployed 47 interconnected agents with the ability to communicate
  • Agent compromise could cascade
  • This was a foreseeable risk
  • Did you implement adequate isolation and monitoring?
  • Industry analysts warn that the rush to adopt agentic AI is leading developers to deploy insecure code; the spread of "vibe coding" suggests some organisations are assembling vulnerable infrastructure at speed

Coverage Uncertainty: Extreme

The cascading, multi-vector nature of the loss creates almost unprecedented complexity for insurance claims. Expect significant disputes over causation, allocation, deductibles, and whether various policy exclusions apply.

The Liability Question: Who Actually Pays?

Even if your insurance policy provides coverage in principle, there's a critical question that must be resolved: Who is legally liable for agent-caused losses?

This matters because insurance follows liability. If you're not liable, your policy doesn't pay. If liability is unclear, your policy coverage is unclear.

Potential Liable Parties

The Organisation (You):

  • You deployed the agent
  • You granted it permissions and access
  • You're responsible for security of your infrastructure
  • You have duties to customers, shareholders, and regulators
  • But... you followed vendor guidance and industry practices

The AI Vendor (The Platform Provider):

  • They provided the AI platform or model
  • They marketed its capabilities
  • They may have known about vulnerabilities
  • But... their Terms of Service almost certainly disclaim liability for autonomous decisions
  • Standard language: "Not responsible for agent outputs, decisions, or actions"
  • "Customer responsible for implementation, oversight, and use"

The Agent Developer (Your Team or Contractor):

  • They built the specific agent implementation
  • They configured its permissions and access
  • They may have made design decisions that enabled the compromise
  • But... they likely followed requirements and specifications you provided
  • They may be an employee (making you vicariously liable) or contractor (with limited liability)

The Model Provider (OpenAI, Anthropic, etc.):

  • They created the underlying AI model
  • But... they have extensive disclaimers in their terms
  • "No warranty of fitness for any particular purpose"
  • "Not liable for downstream applications or uses"
  • "User responsible for implementation and outputs"

The Integration Partner (MSP or Consultant):

  • If an MSP or consultant deployed the agent, they might share liability
  • Professional liability (E&O) coverage might apply
  • But... clients typically make final deployment decisions
  • And agents may operate long after the engagement ends

The Liability Chain Problem

Here's the nightmare scenario for insurance claims:

Each potentially liable party points to the next in the chain. The organisation says they followed the vendor's guidance. The vendor says the customer is responsible for implementation. The developer says they met the specifications. The MSP says they advised against it but the client insisted.

No clear legal doctrine exists for AI agent liability. There's no case law. No regulatory framework. No established precedent.

Insurance follows liability. If liability is unclear, coverage disputes are inevitable. Resolution requires litigation, which takes years and costs millions in legal fees before you see any insurance payout.

What CFOs and CISOs Need to Do Now

Don't wait for the first claim denial to discover your coverage gaps. Take action now:

Immediate Policy Review


Step 1: Get Your Current Policy
Request a complete copy of your cyber insurance policy, including all endorsements, exclusions, and amendments. Don't rely on the summary—read the actual policy language.

Step 2: Schedule Insurer Meeting
Set up a formal meeting with your insurance broker and, if possible, a representative from the underwriting carrier. Make this a documented business meeting, not a casual call.

Step 3: Ask Specific Questions
Put these questions in writing and request written responses:

  • "Does our policy cover financial losses from compromised AI agents that had authorised access to systems?"

  • "How do you define 'unauthorised access' in the context of autonomous systems with legitimate credentials?"

  • "Does social engineering coverage apply to prompt injection attacks against AI agents?"

  • "If an AI agent autonomously causes a data breach, privacy violation, or financial loss, what coverage applies?"

  • "What documentation of AI agent security controls do you require to demonstrate 'reasonable security measures'?"

  • "Are there specific exclusions in our policy that could apply to AI agent deployments?"

  • "Do you offer AI-specific coverage riders or endorsements?"

  • "What has been your claims experience with AI-related incidents to date?"

Step 4: Document Everything
Keep detailed records of all communications with your insurer about AI coverage. If they provide verbal responses, follow up with email confirming your understanding. If they can't answer definitively, document that ambiguity.

This documentation becomes critical if you later need to dispute a claim denial.

Build Your Coverage Defence File

Insurance disputes often hinge on whether you implemented "reasonable security measures" and exercised appropriate care. Build your defence file now, before an incident:

Agent Inventory and Classification:

  • Comprehensive list of all AI agents in your environment
  • Purpose and business justification for each
  • Systems and data each agent can access
  • Permission levels and authorisation scope
  • Risk classification (high/medium/low)
  • Ownership and operational responsibility
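The inventory items above map naturally onto a structured record you can export for an insurer or auditor. A minimal sketch; the field names follow the bullets and the risk classes are illustrative, not a standard schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class AgentRecord:
    """One entry in the AI agent inventory described above."""
    name: str
    purpose: str                                        # business justification
    systems_accessed: list = field(default_factory=list)
    permission_scope: str = "read-only"                 # authorisation scope
    risk: str = "low"                                   # high / medium / low
    owner: str = "unassigned"                           # operational responsibility

    def validate(self):
        """Check the risk class and return a plain dict suitable for export."""
        assert self.risk in {"high", "medium", "low"}, "unknown risk class"
        return asdict(self)
```

Even a flat file of records like this, kept current, answers the first question any underwriter or incident responder will ask: what agents do you have, and what can they touch?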

Security Decision Documentation:

  • Why you deployed each agent (business case)
  • What security controls you implemented
  • What guidance you followed (OWASP, NIST, vendor recommendations)
  • What alternatives you considered
  • What trade-offs you evaluated

Evidence of Following Published Guidance:

  • Reference to the OWASP Top 10 for Agentic Applications
  • NIST AI Risk Management Framework compliance
  • Vendor security best practices
  • Industry standards and frameworks
  • Professional security assessments

Security Assessments and Audits:

  • Internal security reviews of agent deployments
  • Third-party security assessments
  • Penetration testing results
  • Vulnerability assessments
  • Compliance audits

Incident Response Procedures:

  • Updated IR playbooks for agent compromises
  • Tabletop exercise results
  • Team training records
  • Out-of-band (OOB) communication infrastructure deployment
  • Regular testing and validation

Board and Executive Reporting:

  • Documentation that leadership was informed of AI risks
  • Board minutes discussing agent deployments
  • Risk acceptance decisions
  • Budget approvals for security controls

Why this matters: If your claim is disputed, you need to prove you exercised reasonable care. Documentation created before an incident is credible and convincing. Documentation created after an incident looks defensive and self-serving.

Consider Coverage Enhancements

Some insurers are beginning to offer AI-specific coverage options. These are still rare and expensive, but worth exploring:

Potential Endorsements/Riders:

  • Specific coverage for AI agent-caused breaches
  • Enhanced business interruption for agent-caused outages
  • Broader definition of "unauthorised access" that includes agent compromise
  • Regulatory defence specifically for AI-related violations
  • Errors & Omissions coverage for AI deployment decisions
  • Non-human identity compromise coverage

Cost-Benefit Analysis:

Weigh the incremental premium against your exposure:

  • What's your potential loss from agent compromise?
  • What's the probability based on your deployment?
  • What's the cost of coverage ambiguity and potential litigation?
  • What's the value of coverage certainty?

Most insurers don't offer these enhancements yet. You may need to work with specialty cyber insurance carriers or Lloyd's market syndicates. Pricing will likely be high because insurers lack actuarial data on agent-related losses.

But for organisations with significant AI agent deployments and high-value data, the coverage certainty may justify the premium.

The Market Response (Or Lack Thereof)

Where Insurers Currently Stand

The uncomfortable truth: Most cyber insurance carriers haven't issued formal guidance on agentic AI coverage. Underwriters are learning on the job. Risk assessment frameworks don't include agents. Actuarial models don't account for agent-related exposure.

Why the delay?

Lack of Loss Data:

  • Insurers price based on loss history
  • Agent-related losses are too new
  • No statistical models exist
  • Uncertainty leads to conservative underwriting

Rapidly Evolving Technology:

  • Agent capabilities change monthly
  • Security controls are emerging, not established
  • Best practices are still being developed
  • Difficult to assess "reasonable care"

Unclear Legal Liability:

  • No case law on agent liability
  • Regulatory frameworks don't address autonomous AI
  • Multi-party liability chains with no precedent
  • Litigation risk is high and unpredictable

Early Movers:

Some specialty cyber insurers and Lloyd's syndicates are exploring AI coverage options, but deployment is extremely limited. Expect:

  • Very high premiums (200-400% of standard cyber coverage)
  • Restrictive sub-limits (coverage caps specifically for AI incidents)
  • Extensive exclusions and conditions
  • Detailed questionnaires about AI governance
  • Required security controls and monitoring

The Claims That Will Force Change

Here's my prediction: Within the next 12-18 months, a major organisation will experience a significant agent-related loss and file an insurance claim. That claim will be denied or heavily disputed. The resulting litigation will drag on for years.

Only after that precedent-setting case will the insurance industry develop clear policy language, pricing models, and coverage frameworks.

This is exactly what happened with ransomware (remember when insurers argued paying ransoms encouraged future attacks?), with social engineering (debates over whether BEC was "social engineering" or "fraud"), and with cloud security (questions about whether cloud breaches were covered).

For Organisations Operating Today:

You're in the coverage gap right now. You're deploying agents during the period of maximum ambiguity. Your policy was written before insurers understood the risk. Your claim might be the test case that establishes precedent.

This is an uncomfortable position. But it's reality.

Why Out-of-Band Communication Matters for Insurance

Here's a connection most organisations miss: Implementing out-of-band communication infrastructure doesn't just improve your incident response capability. It strengthens your insurance position.

How OOB Communication Helps Your Coverage

Demonstrates Reasonable Security Measures:

  • Shows you've thought beyond standard controls
  • Indicates understanding of advanced threats
  • Proves investment in resilience infrastructure
  • Documents proactive risk management

Reduces Loss Severity:

  • Faster incident detection and response
  • Better coordination during crisis
  • Reduced business interruption duration
  • Lower total claim amounts

Provides Documentation:

  • Clear audit trail of incident response
  • Verified communication records
  • Timeline of actions taken
  • Evidence of appropriate response

Improves Underwriting Position:

  • May favourably influence premium pricing
  • Demonstrates sophisticated risk management
  • Shows commitment to business continuity
  • Differentiates from peers without OOB capability

When you're negotiating with your insurer about agent-related coverage, being able to say "We've deployed independent out-of-band communication infrastructure specifically for scenarios where our primary IT environment is compromised" carries weight.

It shows you understand the threat landscape. You've invested in resilience. You're not just hoping your traditional controls will work—you've built layered defences.

Insurers reward this kind of proactive risk management. It may not guarantee coverage, but it strengthens your position significantly.

The Resilience Investment vs. Coverage Uncertainty:

Consider the trade-off:

  • OOB platform investment: Predictable annual cost, guaranteed capability
  • Enhanced cyber insurance: Higher premiums, uncertain coverage, potential disputes

OOB communication provides certainty. You know you'll have secure coordination capability when you need it. Insurance provides uncertainty—you hope you're covered, but won't know until you file a claim.

Smart organisations invest in both: Insurance for financial risk transfer, and OOB infrastructure for operational resilience regardless of coverage.

Solutions like YUDU Sentinel provide purpose-built out-of-band communication platforms that deliver both operational resilience and insurance position strengthening. When your primary IT infrastructure is compromised and you need to coordinate incident response, Sentinel enables verified human communication through independent channels—exactly the capability that reduces both the likelihood and severity of losses.

Conclusion: Operating in the Coverage Gap

Let me be direct about where we stand:

  • Your cyber insurance probably doesn't clearly cover agentic AI losses. The policy language is ambiguous. The liability questions are unresolved. The industry hasn't caught up to the risk.

  • The first major claim will be heavily disputed. Expect denials, litigation, and years of uncertainty before precedent is established.

  • Policy clarifications are coming... but only after significant losses occur. This is how cyber insurance has always evolved—reactively, not proactively.

  • You're operating in a coverage gap right now. You're deploying agents during the period of maximum ambiguity, before insurers have developed clear frameworks.

 
What This Means for Your Organisation

  • Don't assume you're protected. Review your policy with specific questions about AI agent scenarios. Get answers in writing. Document the responses (or lack thereof).

  • Build your defence file now. Document your security decisions, implement published guidance, create audit trails. Evidence created before an incident is far more credible than justifications created after.

  • Consider coverage enhancements if available. Specialty coverage may be expensive, but coverage certainty has value—especially for organisations with significant AI deployments.

  • Invest in resilience that works regardless of insurance. Out-of-band communication, robust incident response procedures, agent monitoring, and business continuity measures provide value whether or not insurance pays claims.

  • Have the conversation with stakeholders now. Brief your board, your CFO, your leadership team on the coverage ambiguity. Make sure they understand the financial exposure and the uncertainty around insurance protection.

The $1M Question

Actually, it's not just a million-dollar question. For most enterprises, it's a five-to-ten million dollar question. Or larger.

When your AI agent causes a major loss - financial fraud, data breach, business interruption, regulatory violation - will your cyber insurance cover it?

The honest answer: We don't know yet. The first organisations to find out will be the test cases. They'll spend years in litigation, establishing precedent that the rest of the industry will then follow.

You don't want to be the test case. But you might be, if you're deploying agents at scale without understanding the insurance implications.

The insurance industry will eventually price and cover agentic AI risk appropriately. Actuarial models will be developed. Policy language will be clarified. Coverage frameworks will emerge.

But that's 2027-2028. You're deploying agents in 2026, in a coverage vacuum. The question isn't whether the industry will catch up. It's whether you'll be the expensive lesson that forces them to.