FirmAdapt
Tags: AI compliance · regulatory · financial services · banking · compliance · insurance

Why Your Bank's Cyber Insurance Policy Probably Excludes AI Incidents (Read the Fine Print)

By Basel Ismail · May 8, 2026


I spent last week reading through a stack of cyber insurance policy renewals from three major carriers, and I kept finding the same thing: new sublimits, new exclusions, and new definitional carve-outs that specifically target AI-related losses. If your bank deployed any AI tooling in the last 18 months and you haven't reviewed your cyber policy language since your last renewal, you might have a coverage gap big enough to drive a regulatory enforcement action through.

The Exclusion Language Is Already Here

Lloyd's of London issued Market Bulletin Y5381 in August 2023, requiring all Lloyd's syndicates to include explicit AI-related language in cyber policies starting January 2024. That bulletin didn't mandate exclusions per se, but it required underwriters to "clearly define" coverage positions on AI. In practice, most syndicates have responded by narrowing coverage.

What does this look like in actual policy language? A few patterns are emerging across carriers:

  • AI Decision Exclusions: Losses arising from "decisions made by, or recommendations generated by, artificial intelligence or machine learning systems" are excluded from errors and omissions coverage. This means if your AI-powered fraud detection system greenlights a fraudulent transaction, or your AI underwriting tool produces discriminatory lending outcomes, the resulting liability may not be covered.
  • Training Data Sublimits: Some policies now cap coverage for claims related to data used in model training. If your institution faces a claim under BIPA, CCPA, or state-level privacy statutes because training data included customer information, you might find a sublimit of $500K on a $10M policy.
  • Algorithmic Bias Carve-Outs: Fair lending claims arising from AI-driven credit decisions are increasingly treated as a "regulatory action" exclusion rather than a covered cyber event. This is a problem because CFPB enforcement around algorithmic bias has accelerated; the Bureau's September 2023 guidance on adverse action notices under ECOA specifically addresses AI and machine learning models.
  • "Technology Services" Redefinitions: Carriers are quietly amending the definition of "technology services" or "computer systems" to exclude or separately treat outputs from generative AI tools. If an employee uses a large language model to draft customer communications and something goes wrong, the claim might fall outside your policy's core coverage grant.

Why Underwriters Are Spooked

Underwriters price risk based on loss history and predictable exposure. AI breaks both of those inputs. There is essentially no actuarial data on AI-specific losses in financial services yet, and the potential severity is hard to model. A single algorithmic bias class action in lending could produce nine-figure exposure. The Consumer Financial Protection Bureau's $3.7 billion enforcement action against Wells Fargo in December 2022, while not AI-specific, gives you a sense of the scale regulators are comfortable pursuing when consumer harm is systemic.

The reinsurance market is also driving this. Munich Re and Swiss Re both published position papers in 2023 signaling that AI risk needs to be separately underwritten. When reinsurers get nervous, primary carriers respond by excluding or sublimiting the risk. This is the same pattern we saw with ransomware exclusions in 2020 and 2021, and with the war exclusion tightening after NotPetya.

What to Look for at Renewal

Pull your current policy and check these specific sections:

  • Definitions section: Look at how "computer system," "technology services," "data," and "wrongful act" are defined. If any of these have been amended to reference artificial intelligence, machine learning, or automated decision-making, flag it immediately.
  • Exclusions section: Search for "artificial intelligence," "algorithm," "machine learning," "automated," and "model." New exclusionary language sometimes appears as endorsements stapled to the back of the policy rather than in the base form.
  • Sublimits and retentions: Even where AI isn't excluded outright, carriers may impose higher retentions (effectively, higher deductibles) for AI-related claims. I've seen retentions jump from $250K to $1M for claims involving automated decision systems.
  • Regulatory proceedings coverage: Confirm whether CFPB, OCC, FDIC, or state AG investigations triggered by AI systems fall within your regulatory proceedings coverage. Some policies now require that the "wrongful act" giving rise to the proceeding be a human decision, not an algorithmic output.
  • Third-party vendor coverage: If you're using AI tools from a vendor (and most banks are), check whether your policy's "technology services" coverage extends to third-party AI platforms or whether there's a new vendor AI exclusion.
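The keyword search in the checklist above is tedious to do by hand across a base form plus endorsements. Here is a minimal sketch of how to automate a first pass, assuming you have already extracted your policy to plain text; the filename and keyword list are illustrative, and any hits still need a human (ideally your broker or coverage counsel) to interpret:

```python
import re

# Keywords from the renewal checklist above; extend as needed.
AI_KEYWORDS = [
    "artificial intelligence",
    "machine learning",
    "algorithm",
    "automated",
    "model",
]

def flag_ai_language(policy_text: str, context_chars: int = 80) -> list[dict]:
    """Return each keyword hit with surrounding context for manual review."""
    hits = []
    for keyword in AI_KEYWORDS:
        for match in re.finditer(re.escape(keyword), policy_text, re.IGNORECASE):
            start = max(0, match.start() - context_chars)
            end = min(len(policy_text), match.end() + context_chars)
            hits.append({
                "keyword": keyword,
                "offset": match.start(),
                "context": policy_text[start:end].replace("\n", " "),
            })
    return sorted(hits, key=lambda h: h["offset"])

# Example usage against an extracted policy text file (hypothetical filename):
# with open("cyber_policy_2026.txt") as f:
#     for hit in flag_ai_language(f.read()):
#         print(f'{hit["offset"]:>8}  {hit["keyword"]}: ...{hit["context"]}...')
```

A script like this won't catch creative drafting ("computational inference systems" has appeared in at least one form I've reviewed), so treat an empty result as "needs a closer read," not "all clear."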

The Regulatory Feedback Loop

Here is where it gets interesting for compliance teams. Federal banking regulators are increasingly expecting institutions to manage AI risk as part of their enterprise risk management frameworks. The OCC, Fed, FDIC, NCUA, and CFPB jointly issued guidance on AI in financial services in March 2021 (SR 11-7 remains the foundational model risk management framework, but the 2021 RFI signaled expanded expectations). The NYDFS has been even more aggressive; its updated cybersecurity regulation (23 NYCRR 500), amended effective November 2023, requires covered entities to maintain cybersecurity insurance that is "adequate" for their risk profile.

If your AI deployments create risk that your cyber policy explicitly excludes, you have a gap between your regulatory obligation to manage risk and your actual risk transfer position. An examiner asking about your AI risk management program will eventually ask about insurance coverage. If the answer is "our policy excludes AI-related losses," that conversation gets uncomfortable fast.

The SEC's July 2023 proposed rule on predictive data analytics for broker-dealers and investment advisers (Release No. 34-97990) would, if finalized, create additional disclosure and conflict-of-interest obligations around AI. Even in its proposed form, it signals regulatory direction that insurers are watching closely.

Practical Steps Beyond Reading the Policy

First, get your broker involved early. Don't wait until 60 days before renewal. If you're deploying AI tools, your broker needs to understand the specific use cases so they can negotiate coverage rather than accept blanket exclusions. Some carriers will write back coverage for specific, well-documented AI applications where you can demonstrate governance controls, testing, and human oversight.

Second, document your AI governance framework thoroughly. Underwriters are more willing to cover AI risk when they can see a mature risk management program. This includes model validation, bias testing, audit trails, and incident response procedures specific to AI failures. The more you can show that your AI deployments are controlled and monitored, the better your negotiating position.

Third, consider whether you need a separate AI liability policy. The market for standalone AI coverage is nascent but growing. Coalition, At-Bay, and several specialty Lloyd's syndicates have begun offering endorsements or standalone products. Pricing is all over the place because the market is immature, but having the conversation now positions you better than scrambling after an incident.

How FirmAdapt Addresses This

FirmAdapt's architecture is built around auditability and human oversight by design, which directly addresses the governance documentation that underwriters want to see. Every AI interaction on the platform generates a complete audit trail, including inputs, outputs, model versions, and the human review steps applied before any decision or communication reaches a customer or regulator. When your broker or underwriter asks how you govern AI risk, you can hand them actual logs and policy enforcement records rather than a slide deck describing aspirational controls.
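To make the idea of a per-interaction audit record concrete, here is an illustrative sketch of what such a record might capture. The field names and structure are my own illustration, not FirmAdapt's actual schema or API:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIAuditRecord:
    """Hypothetical audit-trail entry capturing the elements underwriters
    ask about: inputs, outputs, model version, and human review."""
    model_name: str
    model_version: str
    prompt: str                      # input sent to the model
    output: str                      # raw model output before review
    human_reviewer: Optional[str]    # who reviewed before anything reached a customer
    approved: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIAuditRecord(
    model_name="fraud-screening-model",
    model_version="2026-04-rc2",
    prompt="Summarize flagged transaction for analyst review.",
    output="Transaction pattern resembles previously confirmed fraud cases.",
    human_reviewer="j.doe@bank.example",
    approved=True,
)
# asdict(record) serializes cleanly to JSON for a broker or underwriter packet.
```

The point is less the exact schema than the discipline: if every AI interaction produces a record like this, the "show me your governance" conversation with an underwriter becomes an export, not a scramble.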

FirmAdapt also maintains compliance mappings to frameworks like SR 11-7, 23 NYCRR 500, and CFPB guidance on automated decision-making, so your AI usage stays within boundaries that regulators and insurers recognize. This kind of documented, continuous compliance posture is exactly what carriers are looking for when deciding whether to write back AI coverage or impose exclusions. It won't eliminate the insurance market's uncertainty about AI risk, but it materially strengthens your position at the negotiating table.

Ready to uncover operational inefficiencies and learn how to fix them with AI?
Try FirmAdapt free with 10 analysis credits. No credit card required.
Get Started Free