Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ


TL;DR

Despite President Trump's order to phase out Anthropic's AI, the U.S. military used its Claude platform in Iran strikes, highlighting operational lags. The dispute centers on AI safeguards and Pentagon contracts, with OpenAI stepping in and concerns over government precedent.

Key Takeaways

  • The U.S. military used Anthropic's Claude AI for intelligence and simulations in Iran strikes, even after Trump ordered a phase-out, showing delays in implementing policy changes.
  • Anthropic refused to remove AI safeguards for mass surveillance or autonomous weapons, leading to a Pentagon designation as a 'supply-chain risk' and a legal challenge.
  • OpenAI quickly filled the gap with a Pentagon deal, claiming similar guardrails, while its CEO criticized the government's handling as a 'scary precedent'.
  • Experts note that integrating AI into defense systems involves sunk costs and procedural lags, making rapid changes difficult and emphasizing the need for model portability.
  • The conflict raises broader questions about AI companies maintaining ethical guardrails under 'any lawful use' contracts in defense applications.

Tags

Artificial Intelligence, Iran, AI, War, White House, Anthropic, Claude
Anthropic's Claude. Image: Decrypt/Shutterstock

Hours after President Donald Trump ordered federal agencies to halt use of Anthropic’s AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company’s Claude platform.

U.S. Central Command used Claude for intelligence assessments, target identification, and simulating battle scenarios during the Iran strikes, people familiar with the matter told The Wall Street Journal on Saturday.

The strike came despite Trump’s directive on Friday that agencies begin a six-month phase-out of Anthropic products, following a breakdown in negotiations between the company and the Pentagon over how the military can use commercially developed AI systems.

Decrypt has reached out to the Department of Defense and Anthropic for comment.

“When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don’t instantly translate to changes on the ground,” Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. “There’s a lag—technical, procedural, and human.”

“By the time a model is embedded across classified intelligence and simulation systems, you’re looking at sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper,” Krishna added.

“Defense agencies will now prioritize model portability and redundancy,” he said. “No serious military operator wants to discover during a crisis that its AI layer is politically fragile.”
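Krishna’s portability-and-redundancy point can be illustrated with a toy sketch: route requests through a provider-agnostic layer so one vendor can be swapped or dropped without rewriting the systems built on top of it. The provider names and interface below are purely hypothetical and reflect no actual defense system.

```python
# Hypothetical sketch of model portability: a router that tries providers in
# priority order and fails over when one becomes unavailable.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion

class ModelRouter:
    """Try providers in priority order; fail over if one is unavailable."""
    def __init__(self, providers: list[Provider]):
        self.providers = providers

    def complete(self, prompt: str) -> tuple[str, str]:
        last_err = None
        for p in self.providers:
            try:
                return p.name, p.complete(prompt)
            except RuntimeError as err:  # provider offline or barred
                last_err = err
        raise RuntimeError(f"all providers failed: {last_err}")

# Example: the primary vendor is cut off; traffic shifts transparently.
def offline(prompt: str) -> str:
    raise RuntimeError("contract suspended")

router = ModelRouter([
    Provider("vendor-a", offline),
    Provider("vendor-b", lambda p: f"analysis of: {p}"),
])
name, out = router.complete("assess target area")
print(name, out)  # falls back to vendor-b
```

The design choice is the one Krishna describes: if the AI layer sits behind an abstraction like this, a politically driven vendor change becomes a configuration swap rather than a re-integration and re-certification effort.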

Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons. 

"We cannot in good conscience accede to their request," Amodei wrote, after the Defense Department demanded contractors allow their systems for "any lawful use."

"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump later wrote on Truth Social, ordering agencies to "immediately cease" all use of Anthropic products. 

Defense Secretary Pete Hegseth followed, designating Anthropic a "supply-chain risk to national security," a label previously reserved for foreign adversaries, and barring every Pentagon contractor and partner from commercial activity with the company. 

Anthropic called the designation "unprecedented" and vowed to challenge it in court, saying it had "never before [been] publicly applied to an American company." 

The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date.

“The debate isn’t about whether AI will be used in defense, that’s already happening,” Krishna added. “It is whether frontier labs can maintain differentiated guardrails once their systems become operational assets under ‘any lawful use’ contracts.”

OpenAI moved quickly to fill the gap, with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks that he said included the same guardrails Anthropic had sought. 

Yesterday we reached an agreement with the Department of War for deploying advanced AI systems in classified environments, which we requested they make available to all AI companies.

We think our deployment has more guardrails than any previous agreement for classified AI…

— OpenAI (@OpenAI) February 28, 2026

Asked whether the Pentagon’s effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, Altman responded on X, “Yes; I think it is an extremely scary precedent, and I wish they handled it a different way.

“I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution,” he added.

Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against each other. 
