Hundreds of tech workers have signed an open letter urging the Department of Defense to reconsider its decision to label Anthropic a “supply chain risk.” The letter, backed by employees of major technology and venture capital firms including OpenAI, Slack, IBM, Cursor, and Salesforce Ventures, also calls on Congress to intervene and assess whether the DOD’s actions against the American technology company are appropriate.
The dispute between the DOD and Anthropic arose when the AI lab refused to grant the military unrestricted access to its AI systems. Anthropic drew a firm line: its technology could not be used for mass surveillance of American citizens or to power autonomous weapons capable of making targeting and firing decisions without human intervention. The DOD, while insisting it had no intention of using Anthropic’s technology for such purposes, argued that it should not be constrained by a vendor’s rules.
After Anthropic CEO Dario Amodei refused to comply with the DOD’s demands, President Donald Trump directed federal agencies to stop using Anthropic’s technology following a six-month transition period. Defense Secretary Pete Hegseth threatened to designate Anthropic a supply chain risk, which would effectively blacklist the AI firm from working with any agency or company associated with the Pentagon.
Despite Hegseth’s assertions, a post on X does not by itself confer a supply chain risk designation on Anthropic. The government must conduct a thorough risk assessment and notify Congress before military partners are required to sever ties with the company. Anthropic has vowed to challenge any such designation in court, calling the decision both legally unsound and unwarranted.
Critics within the industry view the government’s treatment of Anthropic as unduly harsh and retaliatory. The open letter warns of the precedent set by punishing an American company for refusing unfavorable contract terms, arguing that it signals to other technology firms that they too risk retaliation if they resist government demands.
The industry also remains wary of government overreach and the misuse of AI for nefarious purposes. Boaz Barak, a researcher at OpenAI, emphasized the importance of preventing governments from using AI for mass surveillance and urged the industry to unite against such abuses.
In the wake of these events, OpenAI announced a deal to deploy its models in the DOD’s classified environments while reaffirming its commitment to the same red lines Anthropic drew. OpenAI CEO Sam Altman said the AI industry must confront the risks of government misuse of AI and surveillance, calling for comprehensive evaluations and mitigation strategies to guard against potential abuses.
As the tech industry grapples with the implications of these developments, the debate over government access to AI technology and the protection of individual privacy rights will continue to shape the future of AI innovation and regulation.

