When Capital Flooded In
When AI became the dominant venture theme, money flooded in at every layer of the stack. Foundation model companies raised billions. Infrastructure tooling raised hundreds of millions. Orchestration frameworks, vector databases, fine-tuning platforms: everything touched by AI attracted capital at extraordinary speed.
Most of that capital will not generate the returns investors expected. Not because AI isn't transformative — it clearly is — but because investors confused “this technology is important” with “this company is defensible.” Those are very different claims.
The Foundation Model Trap
Foundation model companies are remarkable technical achievements. GPT-4, Claude, Gemini — these are genuinely transformative systems. As venture investments, however, they carry a structural problem that no amount of technical achievement resolves.
Training a frontier model requires sustained capital that most venture funds cannot support at scale: if today's model costs $100M to train, the next generation will cost $500M. The competitors are not startups — they are Google, Microsoft, Meta, and Amazon, each with effectively unlimited compute budgets. Pricing pressure is relentless as open-source alternatives improve. And the companies building applications on top of the models often capture more durable value than the model providers themselves.
Where the Stack Actually Matters
The AI stack runs roughly from bottom to top: compute infrastructure, foundation models, orchestration and tooling, and the application layer. Value migrates upward over time. Infrastructure commoditizes as cloud providers compete on price. Foundation models commoditize as open-source alternatives catch up to proprietary ones. The application layer — where context, workflow, and switching cost accumulate — is where enduring value lives.
For emerging managers, the risk-return profile makes the most sense at the application layer. Application companies that embed deeply into specific vertical workflows accumulate domain context, proprietary data, and switching cost that create genuinely durable businesses. The foundation model underneath them may change. The orchestration layer may be replaced. But the workflow that's been rebuilt around their product, the compliance infrastructure in their deployment, and the years of domain-specific training data they've accumulated cannot easily be replicated by a competitor with access to the same underlying model.
The Key Principle
Models commoditize. Context compounds. Invest accordingly.
The Convergence with Emerging Manager Math
This is where the AI investment thesis converges with the emerging manager math from Part I. A vertical AI company with embedded workflows, proprietary data, and high switching cost doesn't need a $5B exit to move a $75M fund. It needs to be genuinely hard to displace — and that kind of durability starts at the application layer, not the infrastructure layer.
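The fund math behind that claim can be sketched in a few lines. This is a back-of-envelope illustration, not a model from the text: the ownership-at-exit percentages below are assumptions chosen for illustration, and real outcomes depend on dilution, fund reserves, and fee structure.

```python
# Illustrative "fund returner" arithmetic for the $75M fund from the text.
# The ownership-at-exit figures are assumed, not sourced.

def exit_needed_to_return_fund(fund_size: float, ownership_at_exit: float) -> float:
    """Exit valuation at which this one position alone returns the whole fund."""
    return fund_size / ownership_at_exit

FUND_SIZE = 75e6  # $75M fund

# Assume a concentrated early-stage position diluted to ~10% by exit.
print(exit_needed_to_return_fund(FUND_SIZE, 0.10))  # 750000000.0 -> a $750M exit returns the fund

# Even at 5% ownership at exit, roughly $1.5B does it, well short of $5B.
print(exit_needed_to_return_fund(FUND_SIZE, 0.05))  # 1500000000.0
```

Under these assumptions, a defensible vertical application company exiting in the high hundreds of millions can return a $75M fund outright, which is why durability matters more than headline exit size at this fund scale.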
The AI categories with the highest defensibility and the lowest competitive intensity — the upper-left quadrant — are vertical application companies serving industries where data is regulated, workflows are complex, and incumbents are slow to change. Healthcare, legal, insurance, logistics, defense. These are the categories where the application layer moat compounds fastest.
The AI categories with the lowest defensibility and the highest competitive intensity — horizontal productivity tools, general-purpose AI assistants, undifferentiated chatbot wrappers — are where capital has concentrated and returns will disappoint. Being early in a commodity category is not an advantage. It is expensive market research that benefits your competitors.