AI & Technology · May 8, 2026 · 9 min read

Big Tech AI Capex in 2025: Microsoft, Google, Meta, Amazon and the Spending Race

The four hyperscalers committed over $300 billion to AI infrastructure in 2025, more than the GDP of most countries. Here is what each company is buying, why the number keeps going up, and what it means for anyone building on top of AI.

Trace Cohen
3x founder, 65+ investments, building Value Add VC

Quick Answer

Microsoft, Google (Alphabet), Meta, and Amazon collectively committed over $300 billion in AI capital expenditures for 2025: Microsoft at ~$80B, Alphabet at $75B, Meta at $60–65B, and Amazon at $80B+. The spending is concentrated on GPU clusters, custom AI silicon, and data center construction to train and serve large language models at scale.

Microsoft, Alphabet, Meta, and Amazon committed a combined $300B+ to AI infrastructure in 2025, the largest coordinated technology buildout in history, and it is still accelerating.

This is not defensive spending or R&D hedging. Every dollar is tied to a specific bet: that AI compute will be the scarce resource that determines who wins cloud, enterprise software, and consumer products for the next decade.

Big Tech AI Capex 2025: The Numbers by Company

| Company | 2024 Capex | 2025 Capex (Guided) | YoY Change |
| --- | --- | --- | --- |
| Microsoft | $53B | ~$80B | +51% |
| Alphabet (Google) | $52B | $75B | +44% |
| Meta | $38B | $60–65B | +60% |
| Amazon (AWS) | $83B | $80B+ | ~flat/up |
| Combined | ~$226B | $295–300B+ | +32% |

Sources: Company earnings reports, investor guidance, and public filings. Amazon figure includes total company capex, not AWS-only.
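The year-over-year figures follow directly from the guided numbers. A quick sanity check (using the midpoint of Meta's $60–65B guidance range, which is our simplifying assumption and lands slightly above the +60% commonly quoted):

```python
def yoy_growth(prev: float, curr: float) -> float:
    """Year-over-year change as a percentage."""
    return (curr - prev) / prev * 100

# 2024 actual vs. 2025 guided capex, in $B (Meta uses the guidance midpoint)
capex = {
    "Microsoft": (53, 80),
    "Alphabet": (52, 75),
    "Meta": (38, 62.5),   # midpoint of $60-65B guidance (assumption)
    "Amazon": (83, 80),   # total company capex, roughly flat
}

for company, (prev, curr) in capex.items():
    print(f"{company}: {yoy_growth(prev, curr):+.0f}%")

combined_prev = sum(prev for prev, _ in capex.values())
combined_curr = sum(curr for _, curr in capex.values())
print(f"Combined: {yoy_growth(combined_prev, combined_curr):+.0f}%")
```

The combined total comes out to roughly +32%, matching the table.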

What Each Company Is Actually Buying

Microsoft (~$80B)

  • GPU clusters powering Azure OpenAI Service, running GPT-4o, o3, and next-gen models at enterprise scale
  • Global data center expansion: 100+ new facilities announced across the US, Europe, the Middle East, and Asia
  • Custom silicon research (MAIA chips) to reduce Nvidia dependency and lower per-token inference cost
  • Copilot infrastructure: every Microsoft 365 user generating AI requests requires dedicated inference capacity

Alphabet ($75B)

  • TPU v5 and v6 clusters: Google's custom AI accelerators now power Gemini and Google Cloud AI services
  • Search AI integration: Google must serve AI Overviews at query scale without margin collapse
  • Waymo compute: autonomous vehicle training requires continuous large-scale simulation workloads
  • DeepMind research infrastructure for AlphaFold successors and scientific AI programs

Meta ($60โ€“65B)

  • MTIA (Meta Training and Inference Accelerator): a proprietary chip to eventually replace Nvidia GPUs for ranking and recommendations
  • Llama training runs: each Llama 4 generation required clusters of 100,000+ H100s; Llama 5 will require more
  • Meta AI assistant infrastructure: serving 3B+ users across WhatsApp, Instagram, and Facebook
  • Reality Labs compute: VR/AR environments require real-time AI rendering at the device edge

Amazon (~$80B+)

  • AWS data center expansion to serve Bedrock (enterprise AI), SageMaker, and direct GPU rental (P5 instances)
  • Trainium2 and Inferentia chips: Amazon's custom silicon now handles significant internal ML workloads
  • Alexa+ rebuild: Amazon is rebuilding Alexa on a foundation model stack requiring substantial inference capacity
  • Anthropic partnership infrastructure: AWS is Anthropic's primary training cloud, requiring dedicated capacity commitments

Why the Microsoft, Google, Meta, Amazon AI Capex Race Keeps Escalating

Three structural forces prevent any company from pulling back unilaterally:

Training cost scaling

Each frontier model generation requires 10โ€“100x more compute than the last. Skipping a training cycle means falling behind on capabilities that are now core to product differentiation.

Inference demand explosion

Deployed AI products such as Copilot, Gemini, and Meta AI generate billions of queries per day. Inference at this scale requires more compute than training. Under-provisioning means slower responses and higher per-query costs.
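A back-of-envelope calculation shows why inference can overtake training. Using the standard approximations (~6·N·D FLOPs to train a model with N parameters on D tokens, ~2·N FLOPs per generated token at inference) with illustrative numbers we have chosen for the sketch, not company disclosures:

```python
# All figures below are illustrative assumptions, not disclosed numbers.
N = 100e9   # model parameters (a 100B-class model)
D = 10e12   # training tokens

train_flops = 6 * N * D  # standard ~6ND training-compute estimate

queries_per_day = 2e9     # assumed daily queries across products
tokens_per_query = 500    # assumed tokens generated per query
infer_flops_per_day = queries_per_day * tokens_per_query * 2 * N

days_to_match_training = train_flops / infer_flops_per_day
print(f"Training: {train_flops:.1e} FLOPs")
print(f"Inference: {infer_flops_per_day:.1e} FLOPs/day")
print(f"Inference matches total training compute in ~{days_to_match_training:.0f} days")
```

Under these assumptions, serving the model for about a month consumes as much compute as training it once, and serving runs every day, indefinitely.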

Custom silicon race

Every company paying Nvidia $30,000–$40,000 per H100 is motivated to develop proprietary chips. But building silicon takes 3–5 years and billions in R&D, so capex now buys strategic independence later.
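The economics of that trade can be sketched as a simple breakeven model. Every number below is a hypothetical assumption for illustration (fleet size, R&D budget, and per-unit savings are not disclosed figures):

```python
# Hypothetical breakeven for a custom-silicon program vs. buying GPUs.
gpu_unit_cost = 35_000       # assumed midpoint of the $30-40k H100 price
fleet_per_year = 500_000     # assumed accelerators purchased per year
cost_saving_per_unit = 0.30  # assumed per-unit saving from in-house chips

rd_cost = 5e9                # assumed multi-year silicon R&D spend
annual_savings = fleet_per_year * gpu_unit_cost * cost_saving_per_unit
years_to_breakeven = rd_cost / annual_savings

print(f"Annual savings: ${annual_savings / 1e9:.2f}B")
print(f"Breakeven: ~{years_to_breakeven:.1f} years")
```

At hyperscaler purchase volumes, even modest per-unit savings repay a multi-billion-dollar chip program quickly, which is why all four companies are running one despite the 3–5 year lead time.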

Nvidia Is the Real Beneficiary, For Now

Nvidia's data center revenue grew from $15B in FY2023 to $115B in FY2025, largely on the back of hyperscaler capex. In Q4 FY2025 alone, Nvidia generated $35B in data center revenue, with the four major hyperscalers accounting for an estimated 40–50% of total GPU purchases.
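For scale, the FY2023-to-FY2025 figures above imply a compounding rate with few precedents at this revenue base:

```python
# Implied annualized growth of Nvidia data center revenue, FY2023 -> FY2025.
fy23_revenue = 15e9
fy25_revenue = 115e9
years = 2

cagr = (fy25_revenue / fy23_revenue) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")  # roughly +177% per year
```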

But every dollar Microsoft spends on MAIA, Meta spends on MTIA, Google spends on TPUs, and Amazon spends on Trainium is a dollar that will eventually stop flowing to Nvidia. The custom silicon programs are 3–5 year bets, not current-quarter disruptions. For now, Nvidia captures the capex cycle. The question is whether its moat survives the transition to proprietary silicon at scale.

Track the real-time data on our Big Tech Earnings Dashboard and AI Spending Tracker.

What This Means for Startups Building on AI

Tailwinds

  • ✓ API costs continue falling as hyperscalers over-provision capacity
  • ✓ More capable foundation models available at every price point
  • ✓ Hyperscalers incentivized to court AI startups as platform anchors
  • ✓ Enterprise buyers getting comfortable with AI procurement cycles

Headwinds

  • ✕ Hyperscalers building native AI features into core products (vertical integration threat)
  • ✕ Commodity AI becoming a checkbox feature, not a moat
  • ✕ Enterprise buyers defaulting to hyperscaler AI to simplify procurement
  • ✕ Custom silicon reduces third-party GPU availability during training windows

$300B in 2025. Possibly $400B in 2026.

The startups that win are not competing with this spending. They are making it more productive.

Track hyperscaler earnings and AI infrastructure trends on the Big Tech Earnings Dashboard at Value Add VC. Originally published in the Trace Cohen newsletter.

Frequently Asked Questions

How much is Microsoft spending on AI infrastructure in 2025?

Microsoft guided approximately $80 billion in capital expenditure for fiscal year 2025, the vast majority earmarked for AI data centers and GPU compute infrastructure. This compares to $53 billion in FY2024, a ~51% year-over-year increase. Microsoft has committed to building or leasing AI data center capacity on every continent.

What is Google's AI capex in 2025?

Alphabet announced $75 billion in planned capital expenditure for 2025, up from $52 billion in 2024, a 44% increase. The bulk goes to data centers and custom silicon (TPUs). Google also accelerated its investment in Gemini model infrastructure and expanded TPU v5 clusters for both internal workloads and Google Cloud customers.

Why is big tech spending so much on AI in 2025?

Three forces are driving hyperscaler AI capex: training compute for frontier models is scaling roughly 10–100x per generation, inference demand for deployed AI products (Copilot, Gemini, Meta AI, Alexa+) is outpacing existing capacity, and each company is building strategic moats in proprietary silicon (TPUs, Trainium, MTIA) to reduce dependency on Nvidia and cut per-token costs over time.

How does big tech AI capex compare to previous years?

Combined hyperscaler capex (Microsoft, Google, Meta, Amazon) roughly doubled from ~$160B in 2023 to ~$300B+ in 2025. The acceleration follows the commercial deployment of ChatGPT in late 2022, which revealed the enormous inference cost at scale and triggered a race to own the compute stack end-to-end.

Is the big tech AI spending race sustainable?

Near-term, yes: all four companies are generating substantial free cash flow to fund these programs without taking on meaningful debt. Long-term sustainability depends on whether AI products generate revenue proportional to infrastructure cost. Microsoft Azure AI and AWS are already monetizing, but Meta and some Google AI products are still in early revenue stages. The risk is a demand plateau that leaves overcapacity.
