OpenAI CEO Sam Altman recently admitted that the company’s deal with the Department of Defense was rushed and that it had raised concerns about how it would be perceived. Following failed negotiations between Anthropic and the Pentagon, President Donald Trump instructed federal agencies to discontinue use of Anthropic’s technology after a six-month transition period. Secretary of Defense Pete Hegseth also designated Anthropic a supply-chain risk.
In response, OpenAI quickly announced its own agreement to deploy models in classified environments. While Anthropic had drawn clear boundaries around the use of its technology in autonomous weapons and mass domestic surveillance, OpenAI claimed to have comparable restrictions in place, prompting questions about the transparency and integrity of its safeguards relative to Anthropic’s.
To address these concerns, OpenAI executives took to social media to defend the agreement and released a blog post outlining their approach. The post identified three areas where OpenAI’s models cannot be used: mass domestic surveillance, autonomous weapon systems, and high-stakes automated decisions. OpenAI asserted that the agreement protects these red lines through a multi-layered approach: full discretion over its safety stack, cloud-based deployment, oversight by cleared personnel, and strong contractual protections.
Despite these assurances, critics such as Techdirt’s Mike Masnick argued that the deal could still permit domestic surveillance under Executive Order 12333. OpenAI’s head of national security partnerships, Katrina Mulligan, countered these claims by emphasizing that the deployment architecture itself is what prevents the company’s models from being misused in operational hardware.
Altman likewise conceded that the deal had been rushed and had drawn backlash from the industry. He defended the decision, however, saying the company believed it could help de-escalate tensions between the Department of War and the industry. Altman expressed confidence that the deal could position OpenAI as a pioneer in promoting industry cooperation, while acknowledging the risks involved.
Overall, the controversy surrounding OpenAI’s agreement with the Department of Defense has sparked debate about the ethical implications of deploying AI in national security settings. As the company navigates the tension between innovation and responsibility, it remains to be seen how the deal will affect its reputation and its relationships within the industry.