Microsoft, Alphabet, Meta, and Amazon committed a combined $300B+ to AI infrastructure in 2025: the largest coordinated technology buildout in history, and it is still accelerating.
This is not defensive spending or R&D hedging. Every dollar is tied to a specific bet: that AI compute will be the scarce resource that determines who wins cloud, enterprise software, and consumer products for the next decade.
## Big Tech AI Capex 2025: The Numbers by Company

| Company | 2024 Capex | 2025 Capex (Guided) | YoY Change |
|---|---|---|---|
| Microsoft | $53B | ~$80B | +51% |
| Alphabet (Google) | $52B | $75B | +44% |
| Meta | $38B | $60–65B | +60% |
| Amazon | $83B | $80B+ | roughly flat |
| Combined | ~$226B | $295–300B+ | +32% |
Sources: Company earnings reports, investor guidance, and public filings. Amazon figure includes total company capex, not AWS-only.
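As a sanity check, the YoY column can be reproduced from the 2024 and guided 2025 figures. This is a quick sketch: Meta's guided range is taken at its $62.5B midpoint, and Amazon's "$80B+" is floored at $80B, which is why its growth reads roughly flat.

```python
# Reproduce the YoY column from the table above (figures in $B).
# Meta's 2025 guidance ($60-65B) is taken at its midpoint; Amazon's
# "$80B+" is floored at $80B.
capex = {
    "Microsoft": (53, 80),
    "Alphabet": (52, 75),
    "Meta": (38, 62.5),
    "Amazon": (83, 80),
}

def yoy(old: float, new: float) -> float:
    """Year-over-year change in percent."""
    return (new / old - 1) * 100

for name, (y2024, y2025) in capex.items():
    print(f"{name}: {yoy(y2024, y2025):+.0f}%")

total_2024 = sum(v[0] for v in capex.values())
total_2025 = sum(v[1] for v in capex.values())
print(f"Combined: ${total_2024}B -> ${total_2025:.1f}B ({yoy(total_2024, total_2025):+.0f}%)")
```

The midpoint assumption puts Meta's growth nearer +64%; the table's +60% corresponds to the low end of its guided range.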
## What Each Company Is Actually Buying
### Microsoft (~$80B)

- GPU clusters powering Azure OpenAI Service, running GPT-4o, o3, and next-gen models at enterprise scale
- Global data center expansion: 100+ new facilities announced across the US, Europe, Middle East, and Asia
- Custom silicon research (Maia chips) to reduce Nvidia dependency and lower per-token inference cost
- Copilot infrastructure: every Microsoft 365 user generating AI requests requires dedicated inference capacity
### Alphabet ($75B)

- TPU v5 and v6 clusters: Google's custom AI accelerators now power Gemini and Google Cloud AI services
- Search AI integration: Google must serve AI Overviews at query scale without margin collapse
- Waymo compute: autonomous vehicle training requires continuous large-scale simulation workloads
- DeepMind research infrastructure for AlphaFold successors and scientific AI programs
### Meta ($60–65B)

- MTIA (Meta Training and Inference Accelerator): a proprietary chip meant to eventually replace Nvidia GPUs for ranking and recommendations
- Llama training runs: the Llama 4 generation was trained on clusters of 100,000+ H100s; Llama 5 will require more
- Meta AI assistant infrastructure: serving 3B+ users across WhatsApp, Instagram, and Facebook
- Reality Labs compute: VR/AR environments require real-time AI rendering at the device edge
### Amazon ($80B+)

- AWS data center expansion to serve Bedrock (enterprise AI), SageMaker, and direct GPU rental (P5 instances)
- Trainium2 and Inferentia chips: Amazon's custom silicon now handles significant internal ML workloads
- Alexa+ rebuild: Amazon is rebuilding Alexa on a foundation model stack requiring substantial inference capacity
- Anthropic partnership infrastructure: AWS is Anthropic's primary training cloud, requiring dedicated capacity commitments
## Why the Microsoft, Google, Meta, Amazon AI Capex Race Keeps Escalating
Three structural forces prevent any company from pulling back unilaterally:
### Training cost scaling

Each frontier model generation requires 10–100x more compute than the last. Skipping a training cycle means falling behind on capabilities that are now core to product differentiation.
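To make the compounding concrete, here is a minimal sketch of how a constant per-generation compute multiple stacks up. The 10x and 100x factors are the bounds quoted above; absolute units are deliberately omitted.

```python
# Normalized training compute if each generation needs a constant
# multiple of the last, with the current generation pegged at 1.
def generation_compute(factor: int, generations: int) -> list:
    """Compute required for each of the next `generations` models."""
    return [factor ** n for n in range(generations)]

print(generation_compute(10, 4))   # low end of the 10-100x range
print(generation_compute(100, 3))  # high end of the range
```

Sitting out a single cycle at even the 10x rate means re-entering against competitors whose clusters are an order of magnitude larger.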
### Inference demand explosion

Deployed AI products (Copilot, Gemini, Meta AI) generate billions of queries per day. Inference at this scale requires more compute than training. Under-provisioning means slower responses and higher per-query costs.
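A rough capacity sketch shows why query-scale inference demands such large fleets. Every input below is a hypothetical assumption chosen only to illustrate the arithmetic, not a disclosed figure from any of these companies.

```python
# Back-of-envelope inference fleet sizing. All parameters are
# hypothetical assumptions, not disclosed figures.
queries_per_day = 2e9           # assumed: 2B queries/day for one product
tokens_per_query = 500          # assumed: prompt + completion tokens
tokens_per_sec_per_gpu = 1_000  # assumed: sustained accelerator throughput

seconds_per_day = 86_400
fleet_tokens_per_sec = queries_per_day * tokens_per_query / seconds_per_day
accelerators = fleet_tokens_per_sec / tokens_per_sec_per_gpu

print(f"~{fleet_tokens_per_sec:,.0f} tokens/s sustained")
print(f"~{accelerators:,.0f} accelerators, before redundancy or peak load")
```

Even these modest assumptions imply a five-figure accelerator fleet for a single product, running continuously rather than in discrete training runs.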
### Custom silicon race

Every company paying Nvidia $30,000–$40,000 per H100 is motivated to develop proprietary chips. But building silicon takes 3–5 years and billions in R&D, so capex now buys strategic independence later.
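The economics can be sketched as a simple buy-versus-build breakeven. The H100 price reflects the $30,000–$40,000 range quoted above; the per-chip cost of a custom accelerator and the program R&D spend are hypothetical placeholders.

```python
# Buy-vs-build breakeven for a custom silicon program.
# h100_price reflects the $30-40k range above; the other two
# inputs are hypothetical assumptions for illustration.
h100_price = 35_000        # midpoint of the quoted $30-40k range
custom_chip_cost = 20_000  # assumed all-in cost per in-house accelerator
program_rd_cost = 3e9      # assumed multi-year silicon R&D spend

savings_per_chip = h100_price - custom_chip_cost
breakeven_chips = program_rd_cost / savings_per_chip
print(f"Breakeven at ~{breakeven_chips:,.0f} chips deployed")
```

At fleet sizes of 100,000+ accelerators per training cluster, a multi-billion-dollar program can pay back within a couple of hardware generations under assumptions like these, which is why all four companies are funding one.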
## Nvidia Is the Real Beneficiary (For Now)

Nvidia's data center revenue grew from $15B in FY2023 to $115B in FY2025, largely on the back of hyperscaler capex. In Q4 FY2025 alone, Nvidia generated $35B in data center revenue, with the four major hyperscalers accounting for an estimated 40–50% of total GPU purchases.
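That growth is easier to appreciate as an annualized rate. A quick check, using only the revenue figures above:

```python
# Implied compound annual growth rate of Nvidia's data center revenue
# from $15B (FY2023) to $115B (FY2025), i.e. over two fiscal years.
fy2023_rev, fy2025_rev = 15, 115  # $B
years = 2
cagr = (fy2025_rev / fy2023_rev) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly +177% per year
```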
But every dollar Microsoft spends on Maia, Meta spends on MTIA, Google spends on TPUs, and Amazon spends on Trainium is a dollar that will eventually stop flowing to Nvidia. The custom silicon programs are 3–5 year bets, not current-quarter disruptions. For now, Nvidia captures the capex cycle. The question is whether its moat survives the transition to proprietary silicon at scale.
Track the real-time data on our Big Tech Earnings Dashboard and AI Spending Tracker.
## What This Means for Startups Building on AI

### Tailwinds

- API costs continue falling as hyperscalers over-provision capacity
- More capable foundation models available at every price point
- Hyperscalers incentivized to court AI startups as platform anchors
- Enterprise buyers getting comfortable with AI procurement cycles
### Headwinds

- Hyperscalers building native AI features into core products (vertical integration threat)
- Commodity AI becoming a checkbox feature, not a moat
- Enterprise buyers defaulting to hyperscaler AI to simplify procurement
- Custom silicon reduces third-party GPU availability during training windows
$300B in 2025. Possibly $400B in 2026.
The startups that win are not competing with this spending. They are making it more productive.
Track hyperscaler earnings and AI infrastructure trends on the Big Tech Earnings Dashboard at Value Add VC. Originally published in the Trace Cohen newsletter.