OpenAI reveals more details about its agreement with the Pentagon


TL;DR

OpenAI defended its Pentagon deal after Anthropic's negotiations failed, outlining safeguards against mass surveillance and autonomous weapons. Critics questioned if the agreement truly prevents domestic surveillance, while OpenAI emphasized deployment architecture over contract language.

Key Takeaways

  • OpenAI's Pentagon deal was rushed but includes safeguards against mass domestic surveillance, autonomous weapons, and high-stakes automated decisions.
  • The company claims its multi-layered approach with cloud deployment and personnel oversight offers stronger protections than competitors' usage policies.
  • Critics argue the deal's compliance with Executive Order 12333 could still allow domestic surveillance through technical loopholes.
  • OpenAI executives defended the agreement as a de-escalation effort despite backlash that briefly let Anthropic's Claude overtake ChatGPT in Apple's App Store.
  • The company maintains deployment architecture matters more than contract language in preventing misuse.

Tags

OpenAI, Pentagon, AI ethics, national security, surveillance

By CEO Sam Altman’s own admission, OpenAI’s deal with the Department of Defense was “definitely rushed,” and “the optics don’t look good.”

After negotiations between Anthropic and the Pentagon fell through on Friday, President Donald Trump directed federal agencies to stop using Anthropic’s technology after a six-month transition period, and Secretary of Defense Pete Hegseth said he was designating the AI company as a supply-chain risk.

Then, OpenAI quickly announced that it had reached a deal of its own for models to be deployed in classified environments. With Anthropic saying it was drawing red lines around the use of its technology in fully autonomous weapons or mass domestic surveillance, and Altman saying OpenAI had the same red lines, there were some obvious questions: Was OpenAI being honest about its safeguards? Why was it able to reach a deal while Anthropic was not?

So as OpenAI executives defended the agreement on social media, the company also published a blog post outlining its approach.

The post pointed to three areas where it said OpenAI’s models cannot be used — mass domestic surveillance, autonomous weapon systems, and “high-stakes automated decisions (e.g. systems such as ‘social credit’).”

The company said that in contrast to other AI companies that have “reduced or removed their safety guardrails and relied primarily on usage policies as their primary safeguards in national security deployments,” OpenAI’s agreement protects its red lines “through a more expansive, multi-layered approach.”

“We retain full discretion over our safety stack, we deploy via cloud, cleared OpenAI personnel are in the loop, and we have strong contractual protections,” the blog said. “This is all in addition to the strong existing protections in U.S. law.”


The company added, “We don’t know why Anthropic could not reach this deal, and we hope that they and more labs will consider it.”

After the post was published, Techdirt’s Mike Masnick claimed that the deal “absolutely does allow for domestic surveillance,” because it says the collection of private data will comply with Executive Order 12333 (along with a number of other laws). Masnick described that order as “how the NSA hides its domestic surveillance by capturing communications by tapping into lines *outside the US* even if it contains info from/on US persons.”

In a LinkedIn post, OpenAI’s head of national security partnerships Katrina Mulligan argued that much of the discussion around the contract language assumes “the only thing standing between Americans and the use of AI for mass domestic surveillance and autonomous weapons is a single usage policy provision in a single contract with the Department of War.”

“That’s not how any of this works,” Mulligan said, adding, “Deployment architecture matters more than contract language […] By limiting our deployment to cloud API, we can ensure that our models cannot be integrated directly into weapons systems, sensors, or other operational hardware.”

Altman also fielded questions about the deal on X, where he admitted it had been rushed and resulted in significant backlash against OpenAI (to the extent that Anthropic’s Claude overtook OpenAI’s ChatGPT in Apple’s App Store on Saturday). So why do it?

“We really wanted to de-escalate things, and we thought the deal on offer was good,” Altman said. “If we are right and this does lead to a de-escalation between the DoW and the industry, we will look like geniuses, and a company that took on a lot of pain to do things to help the industry. If not, we will continue to be characterized as […] rushed and uncareful.”
