Chapter 6 · Part II: Where Value Accrues in the AI Era

The AI Stack: Where Smart Money Actually Goes

Models commoditize. Context compounds. A map of the ecosystem and why the obvious bets are often the worst ones.

Trace Cohen
3x founder · 65+ investments · Author, The Value Add VC

Key Insight

The AI stack runs from infrastructure (compute, training) → foundation models → orchestration/tooling → applications. Value migrates upward over time. Foundation model companies face structural challenges: training costs require sustained capital, competition is intense among well-funded players, and pricing pressure accelerates as compute costs fall. For emerging managers, the application layer — especially vertical AI with deep workflow integration — offers the best risk-return profile.

At a glance:

- App layer — where emerging managers should concentrate
- 4 layers — infrastructure → models → tooling → apps
- Upward — the direction value migrates over time
- $0 — sustainable foundation model pricing (eventually)

When Capital Flooded In

When AI became the dominant venture theme, money flooded in at every layer of the stack. Foundation model companies raised billions. Infrastructure tooling raised hundreds of millions. Orchestration frameworks, vector databases, fine-tuning platforms: everything touched by AI attracted capital at extraordinary speed.

Most of that capital will not generate the returns investors expected. Not because AI isn't transformative — it clearly is — but because investors confused “this technology is important” with “this company is defensible.” Those are very different claims.

The Foundation Model Trap

Foundation model companies are remarkable technical achievements. GPT-4, Claude, Gemini — these are genuinely transformative systems. As venture investments, however, they carry a structural problem that no amount of technical achievement resolves.

Training costs require sustained capital that most venture funds can't support at scale. A foundation model that costs $100M to train today will cost $500M to train the next generation. The competitors are not startups — they are Google, Microsoft, Meta, and Amazon, each with virtually unlimited compute budgets. Pricing pressure is relentless as open-source alternatives improve. The companies building applications on top of the models often capture more durable value than the model providers themselves.

Where the Stack Actually Matters

The AI stack runs roughly from bottom to top: compute infrastructure, foundation models, orchestration and tooling, and application layer. Value migrates upward over time. Infrastructure commoditizes as cloud providers compete on price. Foundation models commoditize as open-source catches up to proprietary. The application layer — where context, workflow, and switching cost accumulate — is where enduring value lives.

For emerging managers, the risk-return profile makes the most sense at the application layer. Application companies that embed deeply into specific vertical workflows accumulate domain context, proprietary data, and switching cost that create genuinely durable businesses. The foundation model underneath them may change. The orchestration layer may be replaced. But the workflow that's been rebuilt around their product, the compliance infrastructure in their deployment, and the years of domain-specific training data they've accumulated cannot easily be replicated by a competitor with access to the same underlying model.

The Key Principle

Models commoditize. Context compounds. Invest accordingly.

The Convergence with Emerging Manager Math

This is where the AI investment thesis converges with the emerging manager math from Part I. A vertical AI company with embedded workflows, proprietary data, and high switching cost doesn't need a $5B exit to move a $75M fund. It needs to be genuinely hard to displace — and that kind of durability starts at the application layer, not the infrastructure layer.
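The fund math here can be made concrete. A quick sketch (the ownership percentage below is an illustrative assumption, not a figure from the text):

```python
# Fund-returner math: what exit size "moves" a fund?
# The 8% ownership-at-exit figure is a hypothetical assumption
# for illustration; actual stakes vary widely with stage and dilution.

def exit_needed_to_return_fund(fund_size_m: float,
                               ownership_at_exit: float) -> float:
    """Exit value (in $M) at which the fund's stake returns 1x the fund."""
    return fund_size_m / ownership_at_exit

# A $75M fund holding 8% at exit is fully returned by a ~$940M outcome.
print(exit_needed_to_return_fund(75, 0.08))  # 937.5
```

Under these assumptions, a sub-$1B exit returns the whole fund — which is why a durable vertical AI company doesn't need a $5B outcome to matter at this fund size.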

The AI categories with the highest defensibility and the lowest competitive intensity — the upper-left quadrant — are vertical application companies serving industries where data is regulated, workflows are complex, and incumbents are slow to change. Healthcare, legal, insurance, logistics, defense. These are the categories where the application layer moat compounds fastest.

The AI categories with the lowest defensibility and the highest competitive intensity — horizontal productivity tools, general-purpose AI assistants, undifferentiated chatbot wrappers — are where capital has concentrated and returns will disappoint. Being early in a commodity category is not an advantage. It is expensive market research that benefits your competitors.

Frequently Asked Questions

Why are foundation model companies often poor venture investments?
Foundation model companies face a structural venture problem: training costs require sustained capital that most venture funds can't support at scale. Competitive dynamics are intense among well-capitalized players (OpenAI, Google, Anthropic, Meta). Pricing pressure is relentless as compute costs fall. And the companies building applications on top of models often capture more durable value than the model providers themselves.
What layer of the AI stack should emerging managers focus on?
For most emerging managers ($50-150M funds), the application layer — specifically vertical AI applications that embed deeply into specific industry workflows — offers the best risk-return profile. These companies accumulate domain context, proprietary data, and switching cost that create genuinely durable businesses. The foundation model underneath them may change. The workflow that's been rebuilt around their product cannot easily be replicated.
What is the difference between horizontal AI and vertical AI?
Horizontal AI tools promise to do everything for everyone — general-purpose productivity, writing, analysis. Vertical AI tools promise to do one thing extraordinarily well for a specific industry — medical coding, insurance underwriting, legal contract review, freight dispatch. Vertical AI compounds over time as proprietary data, workflow integration, and compliance infrastructure accumulate. Horizontal tools compete on model quality alone, which commoditizes.
How do you evaluate defensibility in an AI startup?
Ask how many of these five layers are in place: (1) domain expertise — does the team understand the industry deeply enough to build what a generalist can't? (2) Workflow embedding — is the product integrated into the daily tools buyers already use? (3) Proprietary data — does every transaction generate labeled training data that improves the model? (4) Compliance infrastructure — SOC 2, HIPAA, FedRAMP as applicable? (5) Switching cost — how painful would replacing this product be? One or two layers means vulnerable. All five means genuinely defensible.
