πŸ“š Chapter 7 Β· Part II: Where Value Accrues in the AI Era

Why Vertical AI Wins

The compounding logic of domain specificity β€” and why breadth is a trap.

Trace Cohen
3x founder Β· 65+ investments Β· Author, The Value Add VC

Key Insight

When foundation model access is available to everyone, differentiation migrates from the model to everything around it: domain knowledge, workflow integration, accumulated data, compliance infrastructure. Vertical AI companies compound these advantages over time in ways horizontal tools cannot. Every processed claim, dispatched truck, or underwritten policy trains the system in ways a new entrant can't replicate without years of equivalent volume.

3 yrs: domain-specific training data that creates a real moat
5 layers: depth of the vertical AI moat stack
Nobody switches once the workflow is fully embedded
120%: NRR achievable for well-embedded vertical AI

The Equal Access Problem

Foundation model access is effectively universal. Any team with a credit card and $20 a month can call GPT-4. The same model that powers enterprise applications is available to any developer on the planet. When everyone can access equivalent model capability, differentiation migrates away from the model and toward everything around it.

That "everything around it" is where vertical AI wins. Domain knowledge that took years to accumulate. Workflow integration that required hundreds of customer implementation hours. Accumulated training data from real deployment at scale. Compliance infrastructure that took 18 months to build. Switching cost that accumulates with every week of use.

The Compounding Mechanics

Vertical AI companies compound in three distinct ways that horizontal tools cannot replicate.

Data flywheel: Every transaction processed generates labeled outcomes that improve the model for the next transaction. A healthcare revenue cycle AI that processes 10 million claims has training data that a new entrant can't acquire by reading documentation or scraping the web. The only way to get it is to process the claims, which requires customers, which requires the data to exist first. The advantage is circular, and that is what makes it self-reinforcing.

Workflow deepening: Every integration point adds switching cost. A product that sits inside the EHR, the billing platform, and the payer submission portal isn't one product anymore β€” it's three integrations that would all need to be rebuilt with a replacement. Buyers rationally avoid that disruption.

Account expansion: A company that starts with underwriting can expand into claims, fraud detection, and pricing. Each new module increases total switching cost while leveraging existing data and relationships. NRR above 120% becomes achievable when expansion is natural and structurally enabled.
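To make the 120% NRR claim concrete, here is a minimal sketch of how net revenue retention is computed for a customer cohort. The figures and the `net_revenue_retention` helper are illustrative assumptions, not data from the chapter; the point is that module expansion within existing accounts can outweigh churn and contraction.

```python
# Hypothetical illustration of net revenue retention (NRR).
# All names and dollar figures below are assumptions for the example.

def net_revenue_retention(starting_arr, expansion, contraction, churn):
    """Revenue retained from an existing cohort over a period,
    including expansion, divided by the cohort's starting revenue."""
    return (starting_arr + expansion - contraction - churn) / starting_arr

# A vertical AI vendor whose underwriting customers add claims and
# fraud-detection modules: expansion outweighs losses, so NRR > 100%.
nrr = net_revenue_retention(
    starting_arr=1_000_000,  # cohort ARR at start of year
    expansion=300_000,       # new modules sold into existing accounts
    contraction=50_000,      # downgrades within retained accounts
    churn=50_000,            # accounts lost entirely
)
print(f"{nrr:.0%}")  # 120%
```

In this toy cohort, $300k of expansion against $100k of combined churn and contraction yields 120% NRR: the installed base grows even with zero new logos.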

From the Book

β€œThe first vertical AI investment that really worked for me had nothing particularly impressive about the model underneath it. What it had was three years of domain-specific training data from real deployment, a team from inside the industry they were serving, and integrations so deep that the largest customer told me they'd rewritten their entire workflow around the product. That's a moat. The model is just the engine.”

β€” Trace Cohen, The Value Add VC

The Breadth Trap

Horizontal AI tools face a strategic contradiction. To grow, they need to serve more users in more industries. But serving more users in more industries requires generality β€” the ability to work well enough for everyone β€” which conflicts with the depth that creates defensibility. The result is a product that's good enough for many use cases but exceptional for none.

Vertical AI companies avoid this trap by design. They choose a beachhead β€” one industry, one workflow, one buyer type β€” and go impossibly deep. Once the beachhead is owned, they expand into adjacent workflows and use cases within the same buyer relationship, not by adding new verticals.

What This Means in Practice

When evaluating a vertical AI company, the core question is: does the team understand this industry at the level required to build something that an insider would trust? Not "did they read about it." Not "do they have an advisor from the industry." Do they have the deep domain expertise to make product decisions that a generalist team would get wrong?

The model isn't the moat. The moat is what you build around it while everyone else is staring at it.

Frequently Asked Questions

What makes vertical AI more defensible than horizontal AI?
Vertical AI compounds in ways horizontal tools cannot. Proprietary data becomes more valuable over time β€” every processed claim, dispatched truck, or completed transaction trains the model in ways new entrants can't replicate without years of volume. Workflow integration deepens as the product becomes embedded in existing systems. Expansion within existing accounts increases total switching cost. Horizontal tools compete on model quality, which commoditizes.
What industries are best suited for vertical AI?
Industries with regulated data, complex workflows, and slow-moving incumbents are ideal for vertical AI: healthcare (clinical documentation, revenue cycle, prior authorization), legal (contract review, discovery, compliance), insurance (underwriting, claims, fraud detection), logistics (dispatch, routing, demand forecasting), and government/defense (procurement, intelligence analysis). These sectors have enough complexity and switching cost to justify deep investment, and enough data to create genuine proprietary advantage.
Can horizontal AI tools be defensible?
Some horizontal tools can build defensibility through network effects (tools where more users create more value for each user), proprietary distribution (Microsoft's Office 365 integration), or brand lock-in in specific buyer segments. But the base rate for horizontal AI defensibility is low. Most horizontal tools compete on model quality and interface, both of which are replicable. The ceiling for defensibility in horizontal AI is lower than in vertical.
How do you know when a vertical AI company actually has proprietary data?
The key test: could a competitor with API access to the same model replicate the product's performance in 12 months? If yes, the data isn't actually proprietary β€” the model is doing the work. If no β€” if years of labeled domain-specific outcomes are embedded in fine-tuned models or retrieval systems β€” then the data is the moat. Ask founders: 'What happens to your product's performance if we replace the model underneath it with an open-source equivalent?' The answer reveals whether the value is in the model or the data.

Read the Full Book

22 chapters on how venture capital actually works β€” the math, the mechanics, and the decisions that compound over time.