Pentagon Bans Anthropic After AI Maker Refuses to Drop Limits on Autonomous Weapons and Mass Surveillance

A contract dispute between the U.S. Department of Defense (DoD) and the artificial intelligence company Anthropic has escalated into a broader test of who sets limits on military uses of advanced AI: the government that buys the technology or the private firms that build it.

President Donald Trump on Friday ordered federal agencies to stop using Anthropic’s AI technology after the company refused to accept contract language the Pentagon said would preserve its ability to use purchased AI “for all lawful purposes,” according to statements and accounts from people briefed on the negotiations.

Defense Secretary Pete Hegseth later designated Anthropic a “supply-chain risk to national security,” a step that would bar the company and, in some cases, firms that work with it from Pentagon contracting. Anthropic said it would challenge the designation in court.

Anthropic said it supports lawful national security uses of AI, but it would not agree to terms that could enable two categories of use: mass domestic surveillance of Americans and fully autonomous weapons with no human involvement. The company said current frontier models are not reliable enough to be deployed in weapons that can select and engage targets without human control, and that mass surveillance of Americans would violate fundamental rights.

The Pentagon has said it does not plan to use Anthropic’s technology for those purposes, but it has argued that a private contractor cannot dictate how the U.S. military lawfully uses tools it purchases for national security missions.

Deadline diplomacy and a breakdown
The dispute centered on a roughly $200 million contract involving a classified version of Anthropic’s Claude model used on sensitive government networks. Negotiations stretched for weeks and tightened as Hegseth set a 5:01 p.m. Friday deadline for the sides to reach an agreement.

People familiar with the talks said negotiators narrowed the differences to a small set of terms covering surveillance and permissible data sources, but the talks deteriorated in the final hours amid mutual distrust and an increasingly public exchange of accusations.

Trump weighed in shortly before the deadline with a social media post attacking Anthropic and asserting that elected leadership, not a private company, should decide how the military fights wars. Hegseth followed after the deadline by announcing the supply-chain designation.

Democratic lawmakers criticized the administration’s move as politicizing national security procurement. Senator Mark Warner of Virginia, the top Democrat on the Senate Intelligence Committee, said halting use of a leading American AI system across the federal government raised questions about whether decisions were being driven by politics rather than careful analysis.

Operational stakes for intelligence and defense work
Current and former officials say AI models like Claude are already used to accelerate intelligence analysis, including helping analysts search for patterns across large volumes of classified reporting and communications. They warn that forcing a rapid transition to a different model would disrupt workflows and could temporarily degrade analytical speed and coverage.

The Pentagon has been exploring other AI systems for classified environments, and officials say alternatives are available. But switching tools across secure networks is technically complex, and agencies will need time to test, validate, and retrain users on any replacement model.

The wider policy clash over AI in warfare
The confrontation comes as the administration pushes to expand AI integration across military planning, logistics, intelligence analysis, and weapons development. Hegseth has argued that software and AI will be central to maintaining a competitive edge against rivals, even as traditional defense procurement has struggled with delays and cost overruns.

Anthropic, by contrast, has sought to build its reputation around safety-focused development and the idea that some AI uses require enforceable guardrails, not just policy assurances. The company’s stance reflects a broader debate within the technology sector over how to prevent advanced models from enabling harmful surveillance, automated targeting, or other high-risk deployments.

Experts in AI policy and military ethics say the dispute underscores a fundamental tension: national security agencies want the flexibility to adapt new tools to evolving threats, while AI developers fear that unfettered use could lead to political backlash, legal exposure, or catastrophic failure if systems behave unpredictably in high-stakes environments.

Legal and contracting precedent
Government contracts specialists said the supply-chain designation, if sustained, could set a significant precedent by using a mechanism associated with national security risk to exclude a domestic technology supplier during a contract dispute. Anthropic said such a move would be legally unsound and could chill future cooperation between innovative AI firms and the federal government.

The administration has also signaled that agencies will have a transition period to phase out Anthropic systems, though officials have not provided a detailed timeline for when all federal use would end or which systems would be replaced first.

Silicon Valley divides, employee pushback
The confrontation has also widened strains between Washington and the tech sector, which had appeared aligned with Trump’s pro-AI agenda. The New York Times reported that employees at OpenAI and Google signed letters urging company leaders to resist Pentagon demands, with some technologists warning that powerful AI could be misused for surveillance. The Times also noted that it has sued OpenAI and Microsoft over alleged copyright infringement related to AI systems; the companies deny the claims.

The dispute comes as earlier global enthusiasm for AI safety initiatives has cooled. The Times reported that the Trump administration revoked some Biden-era safety policies, moved to undermine state AI laws through a December executive order, and lifted certain restrictions on the export of AI semiconductors despite concerns about rivals such as China. The Times reported the European Union has been considering rolling back parts of its 2024 AI regulation and that U.N. efforts to limit certain AI weapons have stalled amid opposition from the United States, Russia, and others.

Consumer backlash and a bump for Claude
Away from Washington, the standoff has spilled into public culture and consumer behavior. Axios reported that Claude rose to No. 1 in U.S. app downloads on Saturday, overtaking ChatGPT, after the Pentagon blacklisted Anthropic and after social media calls to abandon OpenAI circulated following its agreement with the DoD to provide AI for classified systems.

Axios cited the growth of an Instagram account called “quitGPT,” a highly upvoted Reddit post urging users to cancel ChatGPT, and a viral chalk message outside Anthropic’s San Francisco office. Axios also pointed to OpenRouter data showing a range of models outpacing OpenAI in usage over the last month, while noting ChatGPT remained close behind Claude in app-store charts.

What comes next
Anthropic’s expected legal challenge, the Pentagon’s effort to replace AI capabilities on classified networks, and the administration’s broader push to accelerate military AI adoption will determine how quickly agencies can restore continuity in intelligence analysis and defense work.

The episode is also likely to influence future contracting across the AI sector, as companies weigh whether to insist on binding limits for high-risk uses or accept government terms that grant broad discretion, subject only to existing law and internal oversight.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].