Nobody in Silicon Valley had this on their 2026 calendar: an American AI startup, designated a national security threat by its own government, suing the Pentagon while posting its fastest revenue growth in company history — all inside the same two-week window.
What started as a disagreement over contract language has become something neither side fully anticipated: a fight, now playing out in two federal courts simultaneously, over who holds final authority over powerful AI systems once they are woven into national defense infrastructure. The complete collapse of negotiations, and the hours that followed, is documented in our account of Anthropic’s blacklisting and OpenAI’s rapid entry.
The outcome will not only determine Anthropic’s future. It may establish the operating rules for every AI company working with governments for the next decade.
The Clause That Sparked Everything
In January 2026, Defense Secretary Pete Hegseth issued a directive requiring all military AI contracts to include an “any lawful use” clause — language ensuring that once the government purchases an AI system, it can deploy that technology for any legally authorized military purpose, with no restrictions imposed by the developer.
For most contractors, that is boilerplate. For Anthropic, it was a direct collision with the training philosophy baked into its models from day one.
Under CEO Dario Amodei, the company built its systems around Constitutional AI — a methodology that encodes ethical limits into model behavior rather than applying them as a removable filter afterward. Two of those limits became the sticking points: no fully autonomous lethal weapons systems, and no mass domestic surveillance of U.S. citizens. Anthropic treated both as non-negotiable. Pentagon officials saw it differently — as a private company attempting to dictate how the U.S. military could use a strategic asset it had purchased.
That gap never closed. The Responsible Scaling Policy underlying Anthropic’s refusal, and its broader legal implications, are examined in detail here.
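The difference between a baked-in limit and a bolt-on filter is easiest to see as code. What follows is a deliberately toy sketch of the two patterns, with hypothetical names throughout (ToyModel, constitutional_step); it is not Anthropic’s actual training stack, only an illustration of why a post-hoc filter can be deleted by whoever deploys the model, while a critique-and-revise loop pushes the constraint into the training target itself.

```python
# Toy contrast between a removable post-hoc filter and a constitution-driven
# critique-and-revise loop. All names here are hypothetical stand-ins.

CONSTITUTION = [
    "no fully autonomous lethal targeting",
    "no mass domestic surveillance",
]

class ToyModel:
    """Stand-in for a language model; a real pipeline calls an LLM here."""
    def generate(self, prompt: str) -> str:
        return f"response to: {prompt[:40]}"

    def violates(self, text: str, principle: str) -> bool:
        # Toy keyword check; a real pipeline asks the model itself to critique.
        return principle.split()[-1] in text.lower()

def post_hoc_filter(output: str) -> str:
    # Bolt-on guardrail: whoever controls the deployment can delete this
    # function, which is what makes it a "removable filter."
    if any(word in output.lower() for word in ("targeting", "surveillance")):
        return "[refused]"
    return output

def constitutional_step(model: ToyModel, prompt: str) -> str:
    # Critique-and-revise: draft, check the draft against each principle,
    # rewrite any violation. The revised text becomes a fine-tuning target,
    # so the limit ends up inside the weights rather than in deployment code.
    draft = model.generate(prompt)
    for principle in CONSTITUTION:
        if model.violates(draft, principle):
            draft = model.generate(f"rewrite to honor '{principle}': {draft}")
    return draft
```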
February 27: The Day Everything Broke
Pentagon officials gave Anthropic a hard deadline — 5:01 PM on February 27 — to agree to revised contract terms stripping out its safety guardrails. Amodei refused, for the fifth time. Within hours, the administration moved on three fronts simultaneously.
First, the Pentagon used federal supply-chain security authorities to label Anthropic a national security risk. This blocked most government agencies and contractors from using its technology. Second, President Trump posted on Truth Social, directing all federal agencies to stop using Anthropic’s systems immediately. A 180-day phase-out applied where Claude was already in use. Third, Defense Secretary Hegseth announced that no federal contractor could maintain any commercial relationship with Anthropic. This sweeping “commercial quarantine” went far beyond standard procurement rules.
That same evening, OpenAI announced its own classified network agreement with the Pentagon, finalized within hours of Anthropic’s exclusion. Anthropic’s formal response came on March 9 — two federal lawsuits, two courts, two distinct legal theories.
The Operation Nobody Connected to the Blacklist — Until Now
The timeline looks, on its surface, like a contract dispute that escalated badly. The surrounding context tells a different story.
Operation Epic Fury — the joint U.S.-Israeli bombing campaign targeting Iranian military infrastructure and nuclear facilities — launched on February 28, 2026, one day after Anthropic’s blacklisting. CSIS’s operational analysis confirmed the strikes began at approximately 7:00 AM local time, targeting sites across nine Iranian cities.
According to multiple industry sources, Claude had already been used in pre-operation intelligence workflows, reportedly through the Palantir Maven Smart System integration, before Anthropic discovered the deployment and attempted to audit exactly how its models had figured in the targeting process. That audit attempt, sources say, is what pushed the Pentagon from frustration to retaliation.
The reaction inside the Defense Department was, as one official described it, unambiguous: “Once you’re inside the mission loop, you’re part of the mission. You don’t get to pull the plug.”
The timing reframes everything. The Pentagon was not negotiating in a vacuum on February 27. It was on the eve of the largest U.S. military operation in years, and it needed its AI stack fully compliant, immediately. Anthropic’s refusal, in that specific operational context, was not just a philosophical disagreement. It was a mission-day problem.
The Two Lawsuits — And Why They Are Fundamentally Different
When Anthropic filed on March 9, it filed twice — in two different courts, under two different legal theories — because the government had deployed two separate statutory weapons against it.
Suit A: Northern District of California
This complaint challenges the government’s actions under the First and Fifth Amendments and the Administrative Procedure Act. Anthropic’s central argument: its safety guardrails constitute protected speech — a publicly stated policy position — and the government is using its procurement power to punish the company for holding that position. As Nextgov/FCW reported from the filing, the lawsuit alleges the Trump administration’s actions are rooted in “pure ideological disagreement” rather than any genuine security concern — a framing that, if accepted, would significantly constrain the government’s ability to weaponize procurement policy against companies with inconvenient safety stances.
Suit B: D.C. Circuit Court of Appeals
This is the more technically consequential case. The Pentagon invoked the Federal Acquisition Supply Chain Security Act of 2018 — commonly known as FASCSA — to designate Anthropic. Because FASCSA challenges must go directly to the D.C. Circuit, Anthropic filed there separately. The statute exists to shield federal technology supply chains from foreign adversaries who might subvert critical infrastructure. Officials had previously used it only once — against Acronis AG, a Swiss firm with genuine foreign entanglements, in September 2025.
Applying the same statute to a California-headquartered AI lab because of its acceptable use policy is, as Mayer Brown’s procurement team documented, a categorical stretch of the law’s intended scope. Section 3252 of Title 10 U.S.C. requires the Secretary of Defense to find that an adversary “may sabotage, maliciously introduce unwanted function, or otherwise subvert” a covered system. Anthropic’s lawsuit argues the government is redefining “subversion” to include a company’s own published ethical guardrails — using a statute built to catch foreign malware to punish domestic safety policy. That argument, legal analysts broadly agree, is the stronger of the two suits.
How OpenAI Threaded the Needle Anthropic Wouldn’t
When OpenAI stepped in within hours of Anthropic’s exclusion, the question from the AI community was immediate: if OpenAI genuinely shares the same red lines on autonomous weapons and mass surveillance, how did it sign a deal Anthropic couldn’t?
The answer is architectural rather than ethical. As OpenAI detailed in its official agreement post, the company accepted the “any lawful use” clause — but embedded its red lines into the technical deployment structure rather than demanding them as explicit legal carve-outs. The mechanism: cloud-only deployment.
By keeping its models running on OpenAI’s own servers rather than deploying them at the edge — on local hardware aboard drones, autonomous platforms, or air-gapped battlefield systems — OpenAI retains what amounts to a technical kill switch. If the models only run through OpenAI’s monitored cloud pipeline, usage can be tracked, classifiers updated in real time, and access theoretically revoked. TechCrunch’s breakdown of the agreement confirmed that cleared OpenAI safety researchers remain in the loop on classified deployments — something Anthropic had refused to treat as a sufficient substitute for hard legal language.
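In software terms, the leverage comes from one fact: every request has to transit infrastructure the vendor still operates. The sketch below is a hypothetical illustration of that pattern, not OpenAI’s actual gateway, and every name in it is invented. It shows the three properties cloud-only deployment preserves: credentials revocable between one request and the next, a usage classifier updatable without touching customer hardware, and an audit trail on every call. Shipping weights to an edge device forfeits all three.

```python
# Hypothetical sketch of a cloud inference gateway with a server-side
# "kill switch." Illustrative pattern only, not any vendor's real system.
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("inference-gateway")

class AccessRegistry:
    """Revocable credentials: flipping one server-side flag is the kill switch."""
    def __init__(self):
        self._active = {"classified-enclave-01": True}

    def revoke(self, client_id: str) -> None:
        self._active[client_id] = False  # takes effect on the very next request

    def is_active(self, client_id: str) -> bool:
        return self._active.get(client_id, False)

class PolicyClassifier:
    """Usage classifier the vendor can update in real time, because it lives
    on the server rather than on the customer's hardware."""
    def __init__(self):
        self.blocked_markers = {"autonomous lethal engagement"}

    def allows(self, prompt: str) -> bool:
        return not any(m in prompt.lower() for m in self.blocked_markers)

def run_model(prompt: str) -> str:
    return f"model output for: {prompt[:40]}"  # stand-in for actual inference

def handle_request(registry: AccessRegistry, classifier: PolicyClassifier,
                   client_id: str, prompt: str) -> str:
    if not registry.is_active(client_id):
        return "403: access revoked"
    if not classifier.allows(prompt):
        log.warning("policy block: client=%s t=%s", client_id, time.time())
        return "400: request violates usage policy"
    log.info("audit: served request for %s", client_id)  # monitored pipeline
    return run_model(prompt)  # the weights never leave the vendor's cloud
```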
The distinction matters enormously for the autonomous weapons question specifically. A model that only runs through a monitored cloud pipeline physically cannot be embedded in a fully autonomous drone firing loop the way an edge-deployed model could. Whether that technical safeguard is as durable as Anthropic’s legal carve-out approach remains genuinely contested in the AI safety community. But it is the mechanism that allowed OpenAI to say yes where Anthropic said no — and that line now separates Pentagon partner from Pentagon adversary.
Why the Pentagon Still Cannot Quit Claude
The deepest irony of this entire conflict: the six-month phase-out grace period exists because the Pentagon genuinely cannot replace Claude overnight.
Claude-powered systems still run several key operational pipelines, including analysis for the Iran campaign, and the Pentagon has no drop-in alternative until Q4. Anthropic is therefore simultaneously essential to the Defense Department, the fastest-growing vendor in its field, and the plaintiff in two federal lawsuits against its largest former customer.
No AI company has ever occupied that position before.
The Bigger Question This Conflict Forces Into the Open
Strip away the contract filings and the courtroom strategy, and the underlying tension is structural. AI companies increasingly view themselves as stewards of technology with inherent ethical responsibilities. They believe these tools should have limits. Governments, in contrast, see the same technologies as strategic infrastructure. They want them fully available for national defense, purchased and controlled like any other weapon system.
Those two positions can coexist when the stakes are low. They cannot when the stakes involve targeting workflows, autonomous systems, and active military operations against foreign adversaries.
The Anthropic-Pentagon standoff is the first time that collision has played out fully in public: in court filings, in press statements, in executive orders posted to Truth Social. It will not be the last. As AI systems grow more capable and more deeply embedded in geopolitical competition, every frontier lab will eventually face a version of the same question Dario Amodei answered on February 27 at 5:01 PM.
The question is no longer just what AI can do. It is who gets to decide what it won’t.