The global insurance industry writes $7 trillion in premiums annually. AI is creating a new risk category that none of it covers — and enterprises are deploying AI anyway.
This is not a future problem. A hospital system in Ohio paid $11M to settle a claim after an AI diagnostic tool missed a malignant tumor that a physician, relying on its output, also missed. The hospital's insurer denied coverage. The AI vendor's E&O policy had a software automation exclusion. Nobody paid — except the hospital. This scenario is playing out across industries right now, and the insurance market has no coherent answer.
The Coverage Gap Is Enormous
A 2025 Deloitte survey found that 85% of large enterprises are running AI systems in production. Fewer than 15% carry any AI-specific insurance coverage. The ones that believe they're covered by existing policies are almost certainly wrong — they just haven't had a claim tested in court yet.
Traditional policies break down predictably. Many errors and omissions policies exclude "automated decision systems without meaningful human review" — language inserted years ago for rule-based software that now applies to LLMs. Cyber insurance covers data exfiltration and ransomware, not hallucinated outputs. Product liability law was built around defects in tangible goods. AI outputs are intangible, produced dynamically, and not "products" in any legal framework written before 2020.
The EU AI Act, which entered into force in 2024, requires conformity assessments and documentation for high-risk AI systems, and the EU's updated product liability rules extend liability to damage caused by software and AI. That regulatory regime alone is forcing European insurers to price a risk they have no actuarial models for. The U.S. is 18 months behind but heading in the same direction.
New Risk Categories AI Creates
- Hallucination liability — LLMs fabricating citations, contracts, or medical guidance that practitioners act on. Goldman Sachs estimated in 2025 that AI hallucination errors cost U.S. enterprises $4.7B annually in rework, legal exposure, and customer harm.
- Algorithmic discrimination — AI hiring, lending, and pricing systems making decisions that violate the Equal Credit Opportunity Act, the Fair Housing Act, or Title VII. The CFPB issued 14 AI discrimination enforcement actions in 2025 alone.
- Autonomous decision errors — AI agents authorizing payments, executing contracts, or making clinical decisions without human sign-off. The liability chain is completely unclear: is it the operator, the model provider, or the user?
- Model failure cascades — AI systems in supply chains, financial trading, or infrastructure control failing simultaneously because they rely on the same underlying model or data source. Correlated failure at scale.
- Deepfake fraud — AI-generated voice and video used to impersonate executives and authorize wire transfers. The FBI reported $2.9B in losses from AI-enabled business email compromise in 2025, a 340% increase from 2023.
Who Is Building AI Insurance — and Who Will Win
The market is in its earliest formation. Munich Re launched a formal AI liability framework in 2024 and is writing pilot policies for healthcare AI with extremely narrow coverage and high premiums. Coalition has added AI liability endorsements to its cyber policies — mostly covering first-party AI-related business interruption. Cowbell is experimenting with AI risk scoring as part of underwriting. None of these are comprehensive; they are beachheads.
The startup activity is where this gets interesting. Armilla AI, a Canadian company, is building AI evaluation frameworks that output insurable risk scores — essentially becoming the underwriting data layer that traditional carriers need before they can write policies. Insured AI is working on dynamic coverage that adjusts as AI system behavior is monitored in real time. In my view, the company that builds the "Moody's for AI models" — a credible, independent risk rating — will capture the most durable value in this stack.
The structural opportunity is bigger than cyber insurance was in 2010. Early cyber coverage mostly served tech and financial services; AI risk touches healthcare, legal, logistics, real estate, and any business that relies on AI-driven decisions. The premium base, properly underwritten, would be larger than today's entire cyber insurance market within a decade.
What This Means for Founders and Investors
I've looked at seven AI insurance and AI risk management companies in the last six months. The ones that interest me are not trying to be insurers — they are building the infrastructure that makes AI insurable: risk scoring, continuous model monitoring, explainability auditing, and liability attribution tools. Carriers cannot write intelligent policies without that data layer, and right now it doesn't exist at any meaningful scale.
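To make the data-layer idea concrete, here is a minimal, purely hypothetical sketch of what such infrastructure might produce: a handful of monitoring signals rolled up into a single underwriting score. The field names, weights, and thresholds are my own illustrative assumptions, not the methodology of Armilla, Insured AI, Munich Re, or any carrier.

```python
# Hypothetical sketch: turning model-monitoring signals into an underwriting score.
# All fields, weights, and caps below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class MonitoringSnapshot:
    hallucination_rate: float     # share of sampled outputs flagged as fabricated (0-1)
    drift_score: float            # distribution shift vs. evaluation baseline (0 = none, 1 = severe)
    human_review_coverage: float  # share of decisions with documented human sign-off (0-1)
    incident_count_90d: int       # logged production incidents in the last 90 days


def underwriting_score(snap: MonitoringSnapshot) -> float:
    """Map monitoring signals to a 0-100 insurability score (higher = more insurable)."""
    penalty = (
        45 * snap.hallucination_rate          # hallucinations weighted most heavily
        + 25 * snap.drift_score               # unmonitored drift erodes confidence
        + 20 * (1 - snap.human_review_coverage)  # missing human review raises exposure
        + min(10, 2 * snap.incident_count_90d)   # incident history, capped at 10 points
    )
    return max(0.0, 100.0 - penalty)


if __name__ == "__main__":
    snap = MonitoringSnapshot(
        hallucination_rate=0.03,
        drift_score=0.2,
        human_review_coverage=0.8,
        incident_count_90d=1,
    )
    print(f"Insurability score: {underwriting_score(snap):.1f} / 100")
```

The point of the sketch is not the particular weights; it is that a carrier can only price a policy if someone is continuously measuring signals like these and attesting to them.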
For founders building in AI: you need AI-specific insurance whether or not your carrier thinks you do. The gray area in your current E&O policy will not protect you when a claim arrives. Push your broker to get explicit AI output liability coverage in writing, or find a carrier that will write it. A policy that is silent on AI does not mean you have coverage — it means you're uninsured and don't know it yet.
Every major technology shift creates a matching insurance market. Cloud computing created cloud liability coverage. Autonomous vehicles created AV insurance. AI will create its own trillion-dollar coverage category — and the window to build the defining companies in that stack is open right now.
Stay current with VC and startup trends at Value Add VC. Originally published in the Trace Cohen newsletter.