At 5:01 p.m. on February 27, the call between Dario Amodei and Pete Hegseth ended with silence — the bad kind. Sources familiar with the meeting say Amodei had refused, for the fifth time, to sign language allowing the U.S. military to use Claude for “any lawful purpose,” including surveillance and semi-autonomous strike analysis.
By 7 p.m., the U.S. Department of Defense issued a formal designation under Section 3252, marking Anthropic as a “national-security supply-chain risk.” Two hours. That’s how quickly the most explosive AI–government conflict of 2026 ignited.
From $9B to Nearly $20B — A Growth Curve No One Can Ignore
Here’s the part that makes the feud even stranger: Anthropic is in the middle of the fastest revenue acceleration the AI industry has yet seen. Bloomberg confirmed that the company recently surpassed $19 billion in run-rate revenue, up from $9 billion at the end of 2025 and roughly $14 billion just weeks earlier.
Most of the surge isn’t coming from chatbots. It’s coming from Claude Code, the coding and automation engine that quietly became the preferred tool for high-compliance enterprises. Developer usage alone pushed per-user monetization roughly 8x higher than OpenAI’s, according to investors familiar with the books.
One senior cloud architect at Amazon put it bluntly: “If you rip out Claude Code tomorrow, half of our gen-AI automation stack falls over.”
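To make that dependence concrete, here is a minimal sketch of the kind of pipeline step the architect is describing: a CI job that sends the latest diff to Claude for automated review through Anthropic’s public Python SDK. The model ID, prompt, and surrounding pipeline are illustrative assumptions, not details from any contractor’s actual stack.

```python
import os
import subprocess

import anthropic  # Anthropic's public Python SDK

# Illustrative sketch only: a CI step that asks Claude to review the
# current diff. The model ID and prompt are placeholders, not details
# from any real contractor pipeline.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

diff = subprocess.run(
    ["git", "diff", "HEAD~1"],
    capture_output=True, text=True, check=True,
).stdout

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Review this diff for bugs and risky changes:\n\n{diff}",
    }],
)

print(response.content[0].text)
```

Multiply that pattern across build gates, log triage, and internal tooling, and “half of our gen-AI automation stack falls over” stops sounding like hyperbole.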
This dependence would have been unremarkable if the Pentagon hadn’t just told federal contractors to begin phasing the technology out. The full scale of that disruption is detailed in our coverage of the cascade effect already hitting Washington’s defense-contractor ecosystem.
The Trigger Event: Palantir, Caracas, and a Classified Briefing Gone Wrong
Multiple industry sources say the conflict didn’t begin with a contract clause. It began in Venezuelan airspace.
Late last year, analysts using a Palantir platform allegedly relied on Anthropic models during a pre-operation intelligence workflow connected to the Caracas raid. When Anthropic discovered this, Amodei reportedly confronted Pentagon officials, arguing the use violated safety principles and internal deployment rules — a dispute that has since become central to understanding how the blacklist unfolded.
It was the first time a frontier AI company openly questioned how its models were being used inside a classified environment. The reaction inside the Pentagon was, according to one official: “Fury. Complete fury. Once you’re inside the mission loop, you’re part of the mission. You don’t get to pull the plug.”
From that moment, the relationship began deteriorating.
When Negotiations Collapsed, OpenAI Walked In
Here’s the part that stunned the Valley: within hours of Anthropic walking away, OpenAI signed an agreement with the Pentagon built on similar guardrails, but one that accepts the “lawful purpose” clause Anthropic rejected.
As Fortune reported, OpenAI CEO Sam Altman announced the deal hours after Anthropic’s designation, with the company claiming its agreement contains the same two red lines Anthropic had insisted on — but embedded technically rather than written explicitly into law.
Internally, OpenAI leadership told staff the obvious truth: “We cannot control how the Pentagon chooses to use our technology.” The contrast couldn’t have been clearer. Anthropic drew a line. OpenAI drew a signature. And in Washington, signatures win. The full strategic divergence between the two approaches is mapped out in the breakdown of how OpenAI ultimately secured the contract.
Section 3252: The Technical Detail That Changes Everything
The supply-chain risk designation isn’t symbolic — it’s procedural. As Axios first reported, Section 3252 allows the Department of Defense to flag a vendor as a potential national-security threat, require federal contractors to remove the vendor’s technology, halt new deployments for up to 24 months, and trigger a review by the Federal Acquisition Security Council.
For the AI industry, this is the equivalent of grounding a commercial aircraft over a maintenance dispute. Legal experts at Lawfare have argued the designation has serious procedural problems — noting that Section 3252 was built to address foreign adversary threats, not to penalize a domestic company over a contract dispute. Mayer Brown’s analysis similarly flagged that the secondary boycott of Anthropic’s commercial clients appears to exceed what the statute actually authorizes.
Anthropic plans to challenge the designation in court; the filing is already being drafted. At the center of that challenge sits the Responsible Scaling Policy, the internal framework that underpins both Anthropic’s refusal and its legal exposure.
Developer Flight: The Substory No One Is Talking About
In the last 72 hours, something else began happening. Developers inside AWS, Google, and several defense contractors started quietly shifting internal tooling away from Claude — not because they prefer alternatives, but because they’re afraid of compliance audits.
One senior engineer at a defense-adjacent consultancy said: “Teams are freezing Claude usage until legal gives new guidance. This is happening everywhere.”
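What does “freezing Claude usage” look like in practice? Reportedly, less a migration than a kill switch. The sketch below is hypothetical, with invented names (FROZEN_MODEL_VENDORS, resolve_model) and routes, but it captures the pattern engineers describe: a single vendor-level flag that legal can flip without touching call sites.

```python
import os

# Hypothetical compliance gate: legal flips one environment variable
# (e.g. FROZEN_MODEL_VENDORS="anthropic") and every call site is cut off.
FROZEN_VENDORS = {
    v.strip()
    for v in os.environ.get("FROZEN_MODEL_VENDORS", "").split(",")
    if v.strip()
}

# Invented task-to-model routing table, purely for illustration.
MODEL_ROUTES = {
    "code_review": ("anthropic", "claude-sonnet-4-20250514"),
    "log_triage": ("openai", "gpt-4o"),
}

def resolve_model(task: str) -> tuple[str, str]:
    """Return (vendor, model) for a task, refusing frozen vendors."""
    vendor, model = MODEL_ROUTES[task]
    if vendor in FROZEN_VENDORS:
        raise RuntimeError(
            f"{vendor} is frozen pending legal guidance; no fallback for {task!r}"
        )
    return vendor, model
```

The failure mode is the point: where a route has no fallback, a compliance freeze is an outage. That is exactly the position the phase-out order puts Claude-dependent teams in.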
This is the part of the story with long-term consequences. Developer momentum is the real infrastructure of an AI ecosystem. Lose that, and revenue is just a lagging indicator. Just Security’s legal analysis notes that even if Anthropic ultimately prevails in court, the business damage accumulates in the interim — with every general counsel at every Pentagon-adjacent firm now weighing whether Claude is worth the compliance risk.
Why the Pentagon Can’t Quit Claude — At Least Not Yet
Here’s the irony: the Pentagon knows Claude remains essential. The phase-out order includes a six-month grace period — a rare window — because several ongoing operations rely on Claude-powered analysis pipelines.
One defense official admitted privately: “We can’t replace Claude overnight. There’s no drop-in alternative until Q4.”
So Anthropic is simultaneously banned, needed, growing faster than any AI vendor in the U.S., and preparing to sue the Department of Defense. No AI company has ever been in this position.
Timeline: How the Breakdown Happened
January 2026 — Early contract drafts exchanged. Anthropic objects to “lawful purpose” clause.
February 12 — Palantir incident surfaces internally. Tension escalates.
February 21 — DoD lawyers introduce expanded usage rights. Anthropic rejects them.
February 27, 5:01 p.m. — Final call between Amodei and Hegseth collapses.
February 27, 7:00 p.m. — Section 3252 designation issued.
February 28 — OpenAI signs its Pentagon agreement.
March 3–5 — Anthropic hits ~$20B ARR and begins internal legal review.
March 6 — Negotiations quietly reopen through intermediaries.
What This All Means for the Future of AI Power
This showdown isn’t about paperwork. It’s about control.
The Pentagon wants unrestricted AI for national-security use. Anthropic wants enforceable ethical limits. OpenAI wants the contract. Silicon Valley wants predictability. Developers want stability. And investors want ARR that doesn’t depend on political weather.
Someone is going to lose. But the AI industry won’t return to the pre-February status quo — not after a frontier model vendor openly challenged U.S. defense usage.
This is the moment the AI sector discovered it wasn’t building software anymore. It was building power.