
The Pentagon Called It a Threat—Now It Can’t Function Without It

The Pentagon declared Anthropic a national security threat in February. By Friday, it had signed classified AI deals with eight of Anthropic’s competitors. And by the weekend, the White House was quietly drafting executive guidance to let federal agencies use Anthropic anyway.

That’s not a policy. That’s a confession.

The Kill Shot That Wasn’t

The standoff has a clean origin story. Anthropic refused to strip guardrails preventing Claude from being deployed in domestic surveillance and fully autonomous weapons systems. The Pentagon called those limits unworkable. Defense Secretary Pete Hegseth called CEO Dario Amodei an “ideological lunatic” during Capitol Hill testimony. A supply chain risk designation followed — a label typically reserved for foreign adversaries, not American AI companies valued at $61 billion.

Defense contractors scrambled. Certification requirements kicked in. A governmentwide phaseout directive landed. OpenAI, sensing an opportunity, inked a Pentagon deal within hours of the blacklisting. CEO Sam Altman later admitted the timing “looked opportunistic and sloppy.” It looked worse than that. It looked coordinated.

Then a federal judge issued a temporary injunction on the designation in late March. Anthropic filed suits in San Francisco and Washington. And while those cases ground forward, the intelligence community quietly kept using the company’s models.

The Mythos Moment: Why Discovery Outpaced Defense

Here’s the technical reality the Pentagon couldn’t ignore: Mythos isn’t a chatbot. It’s an agentic cybersecurity system — an autonomous agent that finds zero-day vulnerabilities and builds working exploits, often overnight, at costs under $2,000.

Anthropic’s own security research team documented this in April: Mythos Preview autonomously identified and fully exploited CVE-2026-4747, a 17-year-old remote code execution flaw in FreeBSD’s NFS server, granting unauthenticated root access with no human involvement after the initial prompt. It found a 27-year-old bug in OpenBSD. A 16-year-old flaw in FFmpeg. And it produced a browser exploit that chained four separate vulnerabilities through a JIT heap spray, escaping both the renderer and OS sandbox. Where Opus 4.6 had a near-zero autonomous exploit success rate, Mythos Preview hit 72.4%.

“We did not explicitly train Mythos Preview to have these capabilities,” Anthropic’s researchers noted. The offensive power emerged as a byproduct of general improvements in code reasoning and autonomous agency — the same leap that makes the model better at patching vulnerabilities makes it terrifyingly good at exploiting them.

Anthropic launched Project Glasswing alongside the model, a restricted defensive initiative giving access to AWS, Apple, Cisco, CrowdStrike, Google, JPMorgan Chase, Microsoft, NVIDIA, and Palo Alto Networks. Treasury Secretary Scott Bessent separately encouraged major bank CEOs to test it. The NSA, which sits inside the DOD, reportedly started using Mythos anyway, supply chain designation or not.

Pentagon CTO Emil Michael tried holding the line Friday, calling Anthropic still blacklisted but Mythos “a separate national security moment.” That’s a distinction with no legal basis. And it’s exactly the kind of regulatory improvisation that GWU’s Jessica Tillipman, associate dean for government procurement law studies, warned about: when agencies regulate by contract negotiation, they accumulate enormous de facto policy power with zero formal accountability.

The Awkward Company at the Table

Friday’s Pentagon announcement — deals with SpaceX, OpenAI, Google, NVIDIA, Reflection AI, Microsoft, AWS, and Oracle to run AI at Impact Level 6 and IL7 on classified networks — was designed to signal that Anthropic is replaceable. It didn’t land that way.

Reflection AI’s inclusion drew immediate scrutiny. The lesser-known startup raised $2 billion in October and is backed by 1789 Capital, a venture firm where Donald Trump Jr. serves as a partner. That kind of proximity to the administration tells you more about how these deals get made than any capability benchmark does. Anthropic, meanwhile, saw its annual revenue run rate jump from $9 billion at year-end 2025 to $30 billion — driven overwhelmingly by enterprise clients like JPMorgan. The Intel 471 security analysis of Mythos found that the volume of AI-discovered vulnerabilities will “continue to rise sharply” through 2026, regardless of which model the Pentagon officially blesses.

The private sector stepped in where Washington stepped back. The Pentagon tried to manufacture leverage. Instead, it handed Anthropic a global enterprise sales cycle.

What Federal Contractors Should Watch

The White House’s draft executive guidance — still in flux, not yet final — could give agencies a formal bypass mechanism for the supply chain risk designation. That would be a practical fix dressed up as policy. It won’t resolve the underlying legal battles, which are ongoing in two courts. It won’t change the Pentagon’s institutional disdain for Amodei’s guardrails. And it won’t answer the deeper question the Mythos situation has forced into the open: who actually gets to decide what AI can do to Americans, and on whose authority?

The irony isn’t lost on anyone watching closely. Washington spent three months trying to sideline the company that built a zero-day engine that could harden — or destroy — its own classified networks. It didn’t work. The question now is whether the administration can construct a policy framework coherent enough to match the technology it can no longer afford to ignore.

