Most coverage of Google’s $40 billion Anthropic deal is treating it as a paradox. A tech giant is funding its own competitor. A search company hedging against its own disruption. A strange bet in a strange industry.
That’s the wrong frame. This isn’t a paradox. It’s infrastructure economics playing out in real time — and the numbers are far more revealing than the headlines.
The Deal, Stripped Down
Google confirmed April 24 that it’s committing $10 billion to Anthropic immediately, at a $350 billion valuation matching Anthropic’s February funding round price. Another $30 billion follows if Anthropic hits undisclosed performance milestones.
The money matters less than what comes with it. The deal deepens Anthropic’s commitment to expand its use of Google Cloud technologies — including up to one million TPUs, which Google Cloud CEO Thomas Kurian described as a capacity expansion that will bring “well over a gigawatt” online in 2026, scaling toward 5 gigawatts by 2027. To put that in perspective: 5GW is enough electricity to power roughly 3.75 million homes. That’s not a server farm. That’s a national grid commitment dedicated to running Claude.
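That homes figure checks out on the back of an envelope. A minimal sketch in Python; the per-household draw is an assumed rough US average, not a number from the deal:

```python
# Sanity check: how many homes could 5 GW of continuous capacity power?
# Assumption (not from the article): an average US household draws
# roughly 1.33 kW continuously (about 970 kWh per month).
capacity_gw = 5.0
avg_household_kw = 1.33  # assumed average continuous draw per home

homes = capacity_gw * 1e6 / avg_household_kw  # convert GW to kW, then divide
print(f"{homes / 1e6:.2f} million homes")     # ≈ 3.76 million
```

Any reasonable per-household assumption lands in the same ballpark, which is the point: this is utility-scale capacity, not data-center-scale.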
This is Google’s biggest external TPU deployment ever. Anthropic confirmed it in November 2025; Friday’s investment formalizes the financial architecture underneath it.
Why TPUs — and Why It Matters for Claude’s Roadmap
Here’s what almost nobody covering this deal has explained: Anthropic isn’t choosing Google’s chips out of loyalty. It’s choosing them because the math is brutal and clear.
Google’s TPU v6 Trillium generation delivers roughly 4x better performance per dollar than NVIDIA’s H100 GPUs for the large language model inference workloads Claude runs at scale. The power draw is also dramatically lower: about 300W per chip versus the H100’s 700W. When you’re running hundreds of thousands of chips simultaneously, that efficiency gap compounds into hundreds of millions of dollars in annual savings.
Midjourney ran the numbers and moved the majority of its inference fleet from H100 clusters to TPU v6e pods. Monthly inference spend dropped from roughly $2.1 million to under $700,000, a reduction of more than two-thirds, while output volume held steady. Anthropic’s infrastructure lead saw the same economics and went further, committing to up to one million chips by 2027 in what became the largest TPU deal in Google’s history.
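The electricity side of that gap alone is worth putting on the back of an envelope. The chip count and per-chip wattages come from the figures above; the electricity price is an assumed industrial rate, not something either company has disclosed:

```python
# Back-of-envelope: annual power-cost savings from the TPU/GPU wattage gap.
# 300 W per TPU vs 700 W per H100 (cited above), fleet of one million chips.
chips = 1_000_000
watts_saved_per_chip = 700 - 300      # W saved per chip
price_per_kwh = 0.08                  # USD, assumed industrial electricity rate
hours_per_year = 8_760                # continuous, always-on inference

kw_saved = chips * watts_saved_per_chip / 1_000   # 400,000 kW of avoided draw
annual_savings = kw_saved * hours_per_year * price_per_kwh
print(f"${annual_savings / 1e6:.0f}M per year")   # ≈ $280M
```

And that is power alone, before cooling overhead or the 4x performance-per-dollar difference in the hardware itself.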
The newest generation goes further still. Google announced the TPU 8t (for training) and TPU 8i (for inference) this week — with the training chip delivering up to 3x the performance of the previous generation at the same price, and the inference chip using 384MB of SRAM per chip (triple the previous generation) to handle millions of simultaneous agent queries with low latency. That architecture maps directly onto Anthropic’s roadmap: Claude Code, Anthropic’s coding agent, has driven a surge in always-on, high-throughput inference demand that GPU clusters handle less efficiently than TPU pods optimized for exactly this workload.
Anthropic’s compute strategy isn’t Google-exclusive, though. The company runs a deliberate three-platform approach: Google TPUs for inference at scale, Amazon Trainium for primary training (via Project Rainier, a cluster of hundreds of thousands of chips across multiple U.S. data centers), and NVIDIA GPUs for experimental workloads. The $40 billion Google deal expands one leg of that stool. The $25 billion Amazon commitment announced the same week expands another.
The Regulatory Shadow Nobody Wants to Discuss
There’s a conversation happening in federal courtrooms that this deal just made more complicated.
The DOJ’s antitrust case against Google initially proposed that Google unwind its investments in AI startups — including Anthropic. Regulators walked that back in early 2025, settling instead on a requirement that Google give advance notice before any new AI investments. The FTC separately launched a 6(b) inquiry into the Google-Anthropic, Amazon-Anthropic, and Microsoft-OpenAI deals to examine how these “partnership-not-merger” structures affect competition.
Meanwhile, the EU’s Competition Commissioner noted in December 2025 that “many of the risks we warned about are now beginning to materialize” in generative AI markets, with active investigations open into both Meta and Google.
Friday’s $40 billion announcement lands directly inside this scrutiny. The deal was structured — deliberately — to avoid triggering merger review thresholds. Google remains a minority investor with no board control. Anthropic argued in court filings that these investment structures “benefit, not harm, AI competition.” The DOJ’s current position is to watch, not block.
That posture could shift. The DOJ created a new AI task force in January 2026 specifically to pursue enforcement in this space. A $40 billion commitment from one of the world’s largest companies to an AI lab it relies on for cloud revenue — while also competing with it in the model market — is precisely the kind of structure regulators built that task force to examine.
Who’s Betting What: The Capital Commitment Landscape
The scale of these deals only makes sense in comparison:
| Partner | Anthropic Commitment | Structure |
|---|---|---|
| Google | Up to $40B | $10B now + $30B milestone-gated |
| Amazon | Up to $25B | $5B now + $20B milestone-gated + $100B AWS spend commitment |
| Broadcom/CoreWeave | Multi-year capacity deals | Compute, not cash |
For context: Microsoft’s total investment in OpenAI is estimated at $13 billion. Google has now committed to more than triple that figure for a single rival lab — while also competing against that lab’s products every day with Gemini.
Anthropic serves more than 300,000 business customers, and its large accounts — those generating more than $100,000 in annual revenue — grew nearly 7x in the past year. Claude Code drove much of that growth. The demand is real. So is the infrastructure gap it’s exposing.
What to Watch
Anthropic is reportedly eyeing an IPO as early as October 2026. If that happens, Google’s $10 billion entry at a $350 billion valuation starts to look like a floor, not a ceiling — VC firms have already offered to invest at valuations of $800 billion or more, according to Bloomberg. The milestone-gated $30 billion gives Google upside exposure without requiring it to price that optimism today.
The regulatory picture is the wild card. A $40 billion commitment, structured to avoid merger review, made to a direct competitor, deepening infrastructure dependency — that’s a test case for how antitrust enforcement adapts to the AI investment era. The DOJ’s task force is watching. The EU’s competition regulators are watching.
The power capacity those commitments require is already being built.
Related: OpenAI’s Secret “Countries Plan”: The Strategy That Predicted the AI Power Race