U.S. Military Threatens to Cut Ties with Anthropic AI Firm Over ‘Safety’ Restrictions
The War Department is moving swiftly toward severing business ties with Anthropic and designating the AI company a “supply chain risk.” Such a move would require any U.S. military contractor to drop Anthropic’s technology or be excluded from Pentagon contracts.
War Secretary Pete Hegseth and senior Pentagon officials are nearing a decision after months of bitter negotiations. A top defense official warned that disentangling the relationship would be "an enormous pain in the ass" and that Anthropic "will pay a price for forcing our hand."
The designation is typically reserved for foreign adversaries, making the penalty unusually severe for a domestic tech firm. Pentagon spokespeople have emphasized that their AI partnerships are under review, stating: “Our nation requires that our partners be willing to help our warfighters win in any fight.”
The friction centers on Anthropic’s refusal to allow the Pentagon to use its Claude AI model for all lawful purposes — specifically limiting its application in mass domestic surveillance and fully autonomous weapon systems. While Anthropic claims these restrictions protect Americans’ privacy and prevent civilian targeting by unchecked AI, the Pentagon argues the limitations are too restrictive and could undermine battlefield effectiveness.
Reports indicate that the Pentagon’s ultimatum followed months of talks with Anthropic and other major AI developers including OpenAI, Google, and xAI. Anthropic’s Claude is currently the only AI model cleared for classified military systems — and has reportedly been used in recent operations, such as the January capture of Venezuela’s Nicolas Maduro.
The company’s insistence on safety guardrails has frustrated Pentagon leaders and sparked public debate. If the Pentagon proceeds with stripping Anthropic of its military status, companies doing business with the War Department would need to certify that they do not use Claude in their workflows — a challenge given the tool’s widespread adoption across the private sector.
According to recent data, eight of the ten largest U.S. firms reportedly use Claude in some capacity. While OpenAI, Google, and xAI have agreed to remove guardrails for military use on unclassified systems and are negotiating classified access, Anthropic remains the most resistant to loosening its policies. Senior administration officials believe the others are more likely to comply with the “all lawful use” standard.
Anthropic officials counter that current U.S. law already prohibits domestic mass surveillance and that AI's rapid evolution outpaces outdated statutes. They say their ongoing discussions with the War Department are being conducted in good faith to resolve complex policy issues. A contract worth up to $200 million — representing a small fraction of Anthropic's $14 billion annual revenue — could be at risk if the Pentagon moves forward with the supply chain risk designation.