
Google Let the Pentagon Use Its AI — And Gave Up Control in the Process

On Monday at 4 p.m., Google and the United States Department of Defense finalized an agreement allowing Google’s AI systems to be used for classified military work.

The timing wasn’t subtle.

Just one day earlier, more than 600 Google employees had signed an open letter urging CEO Sundar Pichai to reject the deal. Their argument was straightforward: refusing classified military work was the only way to ensure Google’s AI wouldn’t be misused.

Pichai didn’t refuse.

The deal is now signed. And with it, Google has crossed a line it once publicly drew.

The Deal, in Plain Terms

This wasn’t a brand-new contract. Google had already allowed its AI models to be used on unclassified government data.

What changed is where that AI can now operate.

Under the updated agreement, the Pentagon can deploy Gemini inside classified networks for “any lawful government purpose.”

That phrase is doing most of the work.

Classified systems handle:

  • Intelligence analysis
  • Mission planning
  • Operational decision support
  • Potentially, targeting workflows

The contract includes language stating the AI is not intended for domestic mass surveillance or for autonomous weapons that operate without human oversight.

But “not intended” is not the same as “technically prevented.”

And critically, Google cannot veto how the system is used — as long as that use is deemed lawful.

The Clause That Matters: “Lawful Purpose”

At first glance, “lawful purpose” sounds like a safeguard.

In practice, it’s a transfer of responsibility.

It means:

  • The government defines what is lawful
  • The government determines whether its use complies
  • The government operates the system

Google provides the infrastructure — but not the enforcement.

This isn’t unique to Google. It’s quickly becoming the industry standard across major AI labs working with defense.

But that standard has a flaw.

The Enforcement Problem Nobody Wants to Talk About

The real issue isn’t what the contract says.

It’s what Google cannot see.

Classified military systems are air-gapped by design. Once deployed:

  • Google cannot view prompts
  • Google cannot monitor outputs
  • Google cannot audit usage
  • Google cannot intervene in real time

That eliminates the core mechanisms modern AI safety depends on.

In commercial environments, AI systems are continuously monitored, evaluated, and updated based on usage patterns. In classified environments, that feedback loop disappears.

This creates a new category of risk:

Unobservable AI deployment — where compliance is defined legally but cannot be verified technically.

In that context, phrases like “should not be used for” aren’t safeguards.

They’re disclaimers.

From Project Maven to Gemini: A Clear Trajectory

To understand how significant this shift is, you have to go back to Project Maven.

In 2018, Google faced internal backlash for working on AI tools that helped analyze drone footage. The company ultimately chose not to renew the contract and introduced AI principles that explicitly ruled out weapons and surveillance applications.

That position didn’t hold.

In February 2025, Google quietly removed the clause prohibiting those use cases, citing increased global competition in AI.

Each step since then has been incremental:

  • Allow AI in government workflows
  • Expand to broader use cases
  • Now, deploy inside classified systems

Individually, each move was defensible.

Together, they form a clear direction:

Google is no longer avoiding military AI. It is integrating into it.

The Drone Swarm Exit Nobody Noticed

There’s one detail that complicates the narrative.

Google had advanced in a $100 million Pentagon challenge to build systems that allow commanders to control autonomous drone swarms using voice commands.

Then it walked away.

Officially, the reason was resourcing. Internally, reports point to an ethics review.

So Google declined to build a direct drone control interface — but agreed to supply its AI to the systems that inform those decisions.

That’s not a contradiction.

It’s a boundary.

Google isn’t avoiding military applications. It’s avoiding the most visible versions of them.

The Anthropic Contrast

Not every AI company is taking the same path.

Anthropic has reportedly resisted granting unrestricted “lawful purpose” access in similar negotiations — a stance that has created friction with defense partners.

The tradeoff is clear:

  • Google/industry standard: broader access, less control
  • Anthropic approach: tighter control, less adoption

Google’s decision positions it closer to traditional defense contractors — prioritizing integration over constraint.

But that choice comes with a different kind of risk: internal pushback and potential talent loss from researchers who view these boundaries as critical.

What the Employees Actually Understood

The 600 employees who signed the open letter weren’t reacting emotionally.

They were identifying a structural problem.

Their warning was simple: once AI systems are embedded in military workflows without enforceable safeguards, misuse becomes a question of possibility, not policy.

And policy, in this context, is downstream of capability.

Google’s public response framed the deal as a responsible, industry-standard approach to supporting national security.

That framing isn’t wrong.

But it avoids the harder truth:

Responsibility without visibility is not control.

The Real Shift

This isn’t just about one contract.

It’s about a change in how AI is governed.

  • In 2018: “We won’t build this.”
  • In 2026: “We’ll build it — and trust you to use it correctly.”

That’s the shift.

Google didn’t just enter the defense market.

It accepted a model in which AI alignment is no longer enforceable — only assumed.

Why This Moment Matters

Gemini is now inside classified military infrastructure.

The employees who objected didn’t stop it — but they did something more important: they forced the company to put its reasoning on record.

Because if something goes wrong in systems no one can audit, no one can monitor, and no one can externally verify…

The question won’t be whether safeguards existed.

It will be whether they were ever real to begin with.

Related: Anthropic Refused. The Pentagon Retaliated. Now Two Courts Will Decide Who Controls AI.
