OpenAI CEO Sam Altman announced on Friday a new agreement allowing the Department of Defense to use OpenAI’s AI models on its classified network. The deal follows a tense standoff between the DoD — rebranded the Department of War under the Trump administration — and OpenAI’s competitor, Anthropic.
The DoD had been pressuring AI companies, including Anthropic, to permit the use of their models for “all lawful purposes.” Anthropic, however, insisted on limits, particularly around mass domestic surveillance and fully autonomous weapons. In a detailed statement, Anthropic CEO Dario Amodei explained that while the company did not object to specific military operations, it believed that in certain cases AI could be used in ways that undermine democratic values.
As tensions escalated, more than 60 employees from OpenAI and 300 from Google signed an open letter in support of Anthropic’s stance. President Trump even criticized Anthropic on social media, leading to federal agencies phasing out the use of the company’s products over six months.
Secretary of Defense Pete Hegseth accused Anthropic of attempting to dictate the operational decisions of the U.S. military and designated the company a supply-chain risk. In response, Anthropic vowed to challenge the designation in court.
Altman’s announcement indicated that OpenAI’s defense contract includes safeguards addressing the same concerns Anthropic had raised: prohibiting domestic mass surveillance and keeping a human responsible for any use of force, including by autonomous weapon systems. Altman said OpenAI would implement technical safeguards and embed engineers with the Pentagon to ensure its models are used safely.
Altman said he hoped the DoD would extend these terms to all AI companies, framing the deal as a de-escalation toward reasonable agreements. He also disclosed plans for OpenAI to build a “safety stack” to prevent misuse, and emphasized that the government would not compel OpenAI to carry out tasks its models refuse.
In a surprising turn of events, news emerged that the U.S. and Israeli governments had launched airstrikes on Iran, with Trump calling for the overthrow of the Iranian government. Amid these developments, the AI industry finds itself at a critical juncture, with its principles of ethics, safety, and responsibility being put to the test.