Late on Friday, Anthropic filed two sworn declarations with a federal court in California, challenging the Pentagon’s claim that the AI company represents an “unacceptable risk to national security.” Anthropic asserts that the government’s case is based on technical misunderstandings and issues that were not addressed in the preceding negotiations.
These declarations accompanied Anthropic’s reply brief in its lawsuit against the Department of Defense, with a hearing scheduled for this Tuesday, March 24, before Judge Rita Lin in San Francisco.
The conflict dates back to late February when President Trump and Defense Secretary Pete Hegseth publicly announced they were severing ties with Anthropic, following the company’s refusal to permit unrestricted military use of its AI technology.
The declarations were submitted by Sarah Heck, Anthropic’s Head of Policy, and Thiyagu Ramasamy, the Head of Public Sector at the company.
Heck, a former National Security Council official who worked in the Obama White House, went on to Stripe before joining Anthropic, where she manages government relations and policy. She attended the February 24 meeting with CEO Dario Amodei, Defense Secretary Hegseth, and the Pentagon’s Under Secretary Emil Michael.
In her declaration, Heck disputes a central claim in the government’s filings: that Anthropic sought an approval role over military operations. She states, “At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role.”
Heck also notes that concerns about Anthropic disabling or altering its technology mid-operation were never raised during negotiations and appeared for the first time in the government’s court filings, leaving Anthropic no chance to respond.
A notable point in Heck’s declaration is an email from Under Secretary Michael to Amodei on March 4, stating the two sides were “very close” on the issues now cited as national security threats, specifically concerning autonomous weapons and mass surveillance of Americans. This email was sent the day after the Pentagon finalized its supply-chain risk designation against Anthropic.
Heck includes this email in her declaration, suggesting it contrasts with Michael’s public comments in subsequent days. On March 5, Amodei issued a statement about “productive conversations” with the Pentagon. The next day, Michael posted on X that “there is no active Department of War negotiation with Anthropic.” A week later, he told CNBC there was “no chance” of renewed talks.
Heck asks why, if those issues were the grounds for labeling Anthropic a threat, the Pentagon’s own official would write that they were nearly resolved the day after the designation was finalized. She implies the designation may have been used strategically, though she stops short of saying so explicitly.
Ramasamy, who joined Anthropic in 2025 after six years at Amazon Web Services managing AI deployments for government clients, brings a different perspective. He is credited with assembling the team that integrated Anthropic’s Claude models into national security settings, including a $200 million contract with the Pentagon announced the previous summer.
His declaration addresses the government’s assertion that Anthropic could disrupt military operations by deactivating the technology or altering its behavior. Ramasamy contends this is not feasible. Once Claude is deployed in a government-secured, “air-gapped” system by a third-party contractor, Anthropic cannot access it, as there is no remote kill switch, backdoor, or means to push unauthorized updates. He emphasizes that any change requires the Pentagon’s approval and action.
Ramasamy further states that Anthropic cannot view or extract data from what government users input into the system.
He also challenges the claim that hiring foreign nationals poses a security risk. Ramasamy highlights that Anthropic employees have undergone U.S. government security clearance vetting, similar to the process for accessing classified information. He notes that, to his knowledge, Anthropic is unique in having cleared personnel develop AI models for classified environments.
In its lawsuit, Anthropic argues that the supply-chain risk designation, the first of its kind against an American company, is government retaliation for its public stance on AI safety and therefore violates the First Amendment.
The government, in a 40-page filing earlier this week, rejected that interpretation outright, contending that Anthropic’s refusal to allow all lawful military uses of its technology was a business decision, not protected speech. It asserts the designation was purely a national security judgment, not a penalty for the company’s views.

