
The Neutrality Trap: How OpenAI Got the Deal Anthropic Was Fired for Demanding

For years, the biggest names in artificial intelligence played an elaborate game of geopolitical avoidance. The leading labs talked about AI for good. They wrote responsible use policies. They hired ethicists. And they quietly watched the Department of Defense grow hungrier for the kind of frontier technology only they could provide. Last week, OpenAI stopped playing the game and sat down at the table. What it walked away with looks a lot like what its chief competitor was blacklisted for asking for.

The formal agreement between OpenAI and the newly rebranded Department of War is more than a procurement deal. It is a blunt, consequential signal that the era of Silicon Valley's neutrality on national security is over. What makes the agreement remarkable, though, is not that it happened, but the architecture OpenAI built around it, and the question that architecture raises: why did the Pentagon hand OpenAI a contract containing the very safeguards Anthropic had just been blacklisted for demanding?

The Company That Got Fired First
To understand the OpenAI deal, you have to start with the one that fell apart. Anthropic had been operating under a roughly $200 million contract involving a classified version of its Claude model on sensitive government networks. The Pentagon sought new contract language to preserve its right to use purchased AI "for all lawful purposes."

Anthropic drew two firm lines in response: it would not agree to terms that could enable mass domestic surveillance of Americans, and it would not permit fully autonomous weapons with no human involvement. On the latter point, the company argued that current frontier models are simply not reliable enough to be deployed in weapons that can select and engage targets without human control.

Negotiations stretched for weeks. Defense Secretary Pete Hegseth set a 5:01 p.m. Friday deadline. The talks collapsed in the final hours amid what people familiar with the discussions described as mutual distrust and an increasingly public exchange of accusations. President Trump posted on social media that elected leadership, not a private company, should decide how the military fights wars. Hegseth followed by designating Anthropic a "supply-chain risk to national security," a step that bars the company from Pentagon contracting. Anthropic said it would challenge the designation in court.

The message from Washington seemed clear: AI companies do not get to set limits on how the military uses the tools it buys.

Then OpenAI got exactly that.

The Ghost in the Room
Before getting to the irony, it helps to understand why OpenAI moved the way it did. You have to go back to 2018, and a contract called Project Maven. Google agreed to help the Pentagon analyze drone footage using machine learning. When employees found out, they revolted. Thousands signed petitions. Prominent engineers quit. The backlash was swift enough to push Google into a full retreat and the eventual publication of a set of AI Principles that sharply curtailed its military ambitions.

OpenAI has spent considerable energy trying not to repeat that story. Where Google's response to Maven was reactive, OpenAI's approach is deliberately preemptive. Instead of waiting for a scandal and retreating, the company negotiated specific legal language into the foundation of the deal itself. The bet is that you can serve national security interests without lighting your own workforce on fire, so long as the rules are ironclad from day one.

The Deal Anthropic Couldn't Get
The centerpiece of OpenAI's public case for the agreement is what the company calls its legal firewall, and it covers strikingly similar ground to the protections Anthropic demanded and was denied. The contract includes explicit prohibitions on the "intentional" use of AI systems for the domestic surveillance of U.S. persons and nationals. The language is drafted to map directly onto the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act of 1978.

The agreement also wades into one of the most contested areas of modern intelligence: commercially acquired personal data. It makes explicit that the Department of War understands the terms to "prohibit deliberate tracking, surveillance, or monitoring" of U.S. persons through such data streams.

On paper, this is Anthropic's first red line, written into a binding federal contract. Neither the OpenAI agreement nor the public reporting on it addresses whether the deal also covers Anthropic's second demand: a prohibition on fully autonomous lethal weapons with no human involvement. That gap is worth watching.

The question of why the Pentagon accepted these terms from OpenAI after refusing them from Anthropic remains unanswered in the public record. OpenAI may have framed the language more narrowly. The political dynamics may have shifted in the weeks between the two negotiations. Or the administration may have simply decided it needed a deal with someone. What is clear is that the outcome cuts against the administration's stated rationale for blacklisting Anthropic: that private companies cannot dictate how the military uses the tools it purchases.

A Wall Around the Spies
Perhaps the most striking element of the OpenAI agreement is the carve-out for the intelligence community. Despite the Department of War's enormous institutional reach, OpenAI has drawn a line around signals intelligence agencies. The company noted that its services will not be used by Department of War intelligence agencies like the NSA, and that any such arrangement "would require a new agreement."

The effect is a two-tiered system: OpenAI is willing to support logistics, strategy, and defense operations, but it is keeping itself at arm's length from the surveillance apparatus that has generated the most sustained controversy in the post-Snowden era.

The Working Group Play
The more interesting play isn't the contract itself. It's the governance structure OpenAI negotiated alongside it. The deal calls for the Department of War to convene a specialized working group composed of leaders from frontier AI labs and cloud providers alongside representatives from the Department's policy and operational communities.

OpenAI describes this group as "an important forum for ongoing dialogue on emerging AI capabilities, privacy, and national security challenges going forward." The subtext is legible: by embedding its own people inside the policy-making process, OpenAI is positioning itself as a co-author of the rules governing AI in warfare, not merely a contractor subject to them.

Whether the contractual guardrails hold under real operational pressure remains an open question. Contracts get reinterpreted. Administrations change. Working groups get captured. And the company whose demands most closely resembled OpenAI's is currently preparing a court challenge after being shut out of federal contracting altogether.

OpenAI seems to be betting on a specific theory of influence: that the best way to keep dangerous applications from emerging is to stay close enough to the decision-makers to stop them. Anthropic tried to enforce that principle from outside the room. OpenAI is trying to do it from the inside. In 2026, we will find out which strategy holds.

Related: "Anthropic's Claude Tops App Store After Pentagon Dispute Spurs ChatGPT Backlash."

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
