Anthropic won’t budge as Pentagon escalates AI dispute


TL;DR

The Pentagon threatens to designate Anthropic as a supply chain risk or invoke the Defense Production Act to force access to its AI model, escalating a dispute over ethical guardrails. Anthropic refuses to compromise on its policies against mass surveillance and autonomous weapons, creating a high-stakes standoff with national security implications.

Key Takeaways

  • The Pentagon has given Anthropic until Friday to grant unrestricted military access to its AI model or face potential designation as a 'supply chain risk' or invocation of the Defense Production Act.
  • Anthropic maintains its refusal to allow its technology to be used for mass surveillance of Americans or fully autonomous weapons, despite Pentagon pressure.
  • The Defense Department lacks backup options for classified AI systems, making Anthropic's cooperation critical and potentially explaining the aggressive stance.
  • Invoking the Defense Production Act in this context would represent a significant expansion of the law's modern use and could undermine perceptions of U.S. legal stability for businesses.
  • The dispute reflects broader ideological tensions, with some administration officials criticizing Anthropic's safety policies as 'woke' while experts warn of implications for U.S. business stability.

Anthropic has until Friday evening to either give the U.S. military unrestricted access to its AI model or face the consequences, reports Axios.

Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei in a meeting Tuesday morning that the Pentagon will either declare Anthropic a “supply chain risk” — a designation usually reserved for foreign adversaries — or invoke the Defense Production Act (DPA) to force the company to tailor a version of the model to the military’s needs.

The DPA gives the president the authority to force companies to prioritize or expand production for national defense. It was recently invoked during the COVID-19 pandemic to compel companies like General Motors and 3M to produce ventilators and masks, respectively.

Anthropic has long stated that it doesn’t want its technology used for mass surveillance of Americans or for fully autonomous weapons — and is refusing to compromise on these points.

Pentagon officials have argued the military’s use of technology should be governed by U.S. law and constitutional limits, not by the usage policies of private contractors. 

Using the DPA in a dispute over AI guardrails would mark a significant expansion of the law’s modern use. It would also extend a broader pattern of executive branch instability that has intensified in recent years, according to Dean Ball, senior fellow at the Foundation for American Innovation and a former senior policy adviser on AI in Trump’s White House.

“It would basically be the government saying, ‘If you disagree with us politically, we’re going to try to put you out of business,’” Ball said. 

The dispute unfolds against a backdrop of ideological friction, with some in the administration — including AI czar David Sacks — publicly criticizing Anthropic’s safety policies as “woke.” 

“Any reasonable, responsible investor or corporate manager is going to look at this and think the U.S. is no longer a stable place to do business,” Ball said. “This is attacking the very core of what makes America such an important hub of global commerce. We’ve always had a stable and predictable legal system.”

It’s a serious game of chicken, and Anthropic may not be the one to blink first. According to Reuters, Anthropic doesn’t plan to ease its usage restrictions.

Anthropic is the only frontier AI lab with classified DOD access, according to several reports. The Department of Defense doesn’t have a backup option currently in play — though the Pentagon has reportedly reached a deal to use xAI’s Grok in classified systems. 

That lack of redundancy may help explain the Pentagon’s aggressive posture, Ball argued. 

“If Anthropic canceled the contract tomorrow, it would be a serious problem for the DOD,” he told TechCrunch, noting the agency appears to be falling short of a National Security Memorandum from the late Biden administration that directs federal agencies to avoid dependence on a single classified-ready frontier AI system. 

“The DOD has no backups. This is a single-vendor situation here,” he continued. “They can’t fix that overnight.”

TechCrunch has reached out to Anthropic and the DOD for comment. 
