According to McKinsey's 2025 Global AI Survey, 78% of enterprises report using AI in at least one business function. But "using AI" includes a ChatGPT subscription someone bought on a corporate card.
Production AI — systems making consequential decisions at scale, embedded in core workflows, without a human triple-checking every output — is a far smaller number. Gartner estimated in late 2025 that fewer than 20% of enterprise AI pilots ever reach full production deployment. That gap has a name: the AI trust problem.
The Pilot Purgatory Is Real
I have talked to dozens of enterprise software founders over the last two years. Almost all of them tell the same story: a strong demo, an enthusiastic champion, a pilot that goes well, then a six-month procurement cycle that eventually dies somewhere in legal or IT security. The technology works. The deal does not close.
- 78% of enterprises report using AI in some function (McKinsey, 2025)
- Fewer than 20% of AI pilots reach full production deployment (Gartner, 2025)
- 11 months: average enterprise AI procurement cycle (Salesforce State of AI, 2025)
What Enterprises Actually Fear
The objections founders hear — "data security concerns," "integration complexity," "need more stakeholder alignment" — are proxies for a deeper problem. Enterprises are not afraid of AI being wrong. They are afraid of being unable to explain, defend, or attribute an AI-driven outcome when it causes harm.
Legal liability without clear attribution
When an AI system denies a loan, misroutes a patient, or generates a compliance violation, someone has to own it. Most AI vendors offer no contractual indemnification for outputs.
Regulatory audits they can't pass
EU AI Act Article 13 requires high-risk AI to provide transparency documentation. HIPAA, SOX, and FCA rules demand audit trails. Black-box models fail these tests by default.
Shadow AI creating undocumented risk
Employees use consumer AI tools regardless of policy. Enterprises fear that uncontrolled AI use creates liability they will only discover after an incident.
Vendor lock-in on critical infrastructure
Embedding an AI system into core workflows means depending on that vendor's uptime, pricing decisions, and model updates forever. Procurement teams have seen what cloud lock-in costs.
The Governance Gap Is a Product Opportunity
The most interesting AI infrastructure investment thesis right now is not model performance — it is trust infrastructure. The companies building the audit layers, explainability frameworks, and compliance tooling that sit between AI outputs and enterprise workflows are solving a real, structural problem.
What Unlocks Enterprise Deployment
- ✓ SOC 2 Type II certification as table stakes
- ✓ Human-in-the-loop override at every critical decision
- ✓ Full audit logs of model inputs, outputs, and versions (see the sketch below this list)
- ✓ Contractual indemnification for AI-generated errors
- ✓ Data residency and sovereignty controls
- ✓ Explainability reports legal teams can present to regulators
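To make the audit-log item concrete, here is a minimal sketch in Python of the record a trust layer might write on every model call. Everything in it is illustrative rather than any vendor's actual schema: the AuditRecord fields, the log_decision wrapper, and the local JSONL file standing in for an append-only, access-controlled store are all assumptions.

```python
import hashlib
import json
import uuid
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One immutable record per model decision -- the unit a compliance
    officer would later query. All field names are illustrative."""
    decision_id: str
    timestamp: str
    model_name: str
    model_version: str      # pin the exact version so the decision is reproducible
    input_hash: str         # hash of canonicalized inputs, for tamper evidence
    raw_input: dict
    output: dict
    human_override: bool = False   # set when a reviewer replaces the model's output
    reviewer_id: str | None = None

def log_decision(model_name: str, model_version: str,
                 inputs: dict, output: dict) -> AuditRecord:
    """Wrap every model call so no output reaches the workflow unrecorded."""
    record = AuditRecord(
        decision_id=str(uuid.uuid4()),
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_name=model_name,
        model_version=model_version,
        input_hash=hashlib.sha256(
            json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        raw_input=inputs,
        output=output,
    )
    # A local JSONL file stands in for an append-only audit store here.
    with open("audit_log.jsonl", "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")
    return record
```

The design choice worth noting is pinning the model version and hashing the canonicalized input: six months later, the question is never "what did the model probably see" but "here is exactly what it saw, and exactly which model saw it."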
What Keeps Deals in Pilot Purgatory
- ✕ "Our model is highly accurate" without accountability
- ✕ Shared multi-tenant infrastructure for regulated data
- ✕ No documented incident response process
- ✕ Liability entirely on the customer in the ToS
- ✕ Black-box outputs with no interpretability layer
- ✕ Pricing models that change post-deployment
The Regulated Industry Paradox
Healthcare, financial services, and insurance are the sectors with the most to gain from AI. They are also the sectors with the lowest deployment rates for anything beyond internal productivity tools. This is not a coincidence.
A hospital system that deploys an AI triage tool faces FDA clearance questions. A bank that uses AI for credit decisions faces fair-lending scrutiny under the Equal Credit Opportunity Act. An insurer that routes claims through an AI model faces state-level regulatory review. The legal surface area is enormous. A single AI-driven decision that causes documented harm can generate years of litigation and regulatory inquiry.
The founders building AI into these verticals who figure out the compliance architecture — not just the model performance — are building genuine moats. Clearance processes, regulatory relationships, and battle-tested audit frameworks are not replicable in six months by a competitor with better marketing.
What VCs Should Actually Be Asking
When I look at enterprise AI companies, I care less about the model accuracy benchmarks and more about the deployment infrastructure. Specifically:
1. How does the company handle a customer claim that an AI output caused harm? Is there contractual liability language, and who bears it?
2. What does the audit trail look like? Can a compliance officer reconstruct exactly why the model made a specific decision six months ago? (A sketch of what that reconstruction can look like follows this list.)
3. Is the trust architecture built into the core product or bolted on as an afterthought for enterprise procurement?
4. How has the company navigated its first regulatory inquiry or legal challenge involving an AI output?
5. What is the moat if a hyperscaler deploys the same model with better compliance infrastructure?
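On question 2: if records are written the way the earlier sketch suggests, reconstruction is a lookup rather than forensics. A minimal sketch, again with hypothetical names and the same stand-in JSONL store:

```python
import json

def reconstruct_decision(log_path: str, decision_id: str) -> dict:
    """Rebuild the full context of one past decision from an append-only
    JSONL audit log (as written by the earlier sketch). Illustrative only."""
    with open(log_path) as f:
        for line in f:
            record = json.loads(line)
            if record["decision_id"] == decision_id:
                return {
                    "when": record["timestamp"],
                    "model": f'{record["model_name"]}@{record["model_version"]}',
                    "inputs": record["raw_input"],
                    "output": record["output"],
                    "overridden_by_human": record["human_override"],
                }
    raise KeyError(f"no audit record for decision {decision_id}")
```

If answering that question requires re-running the model and hoping for the same output, the audit trail does not exist.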
The AI companies that win in enterprise are not building the most impressive demos.
They are building the compliance architecture that allows a Fortune 500 general counsel to say yes — and that is a far harder, far more defensible problem to solve.
Track enterprise AI adoption and deployment trends on the AI Landscape Dashboard at Value Add VC. Originally published in the Trace Cohen newsletter.