Accusations of Distillation Attacks by Chinese AI Companies Surface
The AI company Anthropic has accused three Chinese AI companies of using more than 24,000 fake accounts to exploit Anthropic’s Claude AI model for their own benefit. The companies in question, DeepSeek, Moonshot AI, and MiniMax, allegedly engaged in over 16 million exchanges with Claude through these fake accounts, employing a technique known as “distillation.” Anthropic claims the labs targeted Claude’s distinctive capabilities in agentic reasoning, tool use, and coding.
These accusations come amid ongoing discussions about enforcing export controls on advanced AI chips, with the aim of restricting China’s AI development. Distillation, in which a smaller “student” model is trained to reproduce the outputs of a larger “teacher” model, is a common and legitimate method AI labs use to train their own models, but competitors can also exploit it to replicate another lab’s work. OpenAI recently accused DeepSeek of using distillation to imitate its products, further fueling the debate.
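For readers unfamiliar with the technique, the core of distillation can be sketched in a few lines. This is a minimal illustration of the standard objective, not a description of any lab’s actual pipeline: the student is trained to minimize the divergence between its temperature-softened output distribution and the teacher’s. All function names and values here are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature yields softer
    # target distributions that expose more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions,
    # the standard training signal in knowledge distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student whose logits already match the teacher's incurs zero loss;
# any mismatch produces a positive loss the student is trained to reduce.
teacher = [4.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))            # 0.0
print(distillation_loss(teacher, [1.0, 4.0, 0.2]) > 0)  # True
```

In an abuse scenario of the kind Anthropic alleges, the “teacher” outputs would be responses harvested at scale from a commercial API rather than from a model the attacker owns, which is why the volume of exchanges, 16 million in this case, is central to the accusation.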
DeepSeek gained attention last year with the release of its R1 reasoning model, which rivaled top American labs in performance at a fraction of the cost. The company is now gearing up to launch DeepSeek V4, its latest model, reportedly capable of outperforming both Anthropic’s Claude and OpenAI’s ChatGPT in coding.
The scope of the alleged attacks varied among the three companies. DeepSeek reportedly focused on improving foundational logic and alignment. Moonshot AI, which recently unveiled its Kimi K2.5 model and coding agent, targeted agentic reasoning, tool use, coding, data analysis, and computer vision. MiniMax directed its efforts toward agentic coding, tool use, and orchestration, with Anthropic observing the company redirecting traffic to siphon capabilities from the latest Claude model.
Anthropic is committed to enhancing defenses against distillation attacks and is calling for a coordinated response from the AI industry, cloud providers, and policymakers. The company emphasizes the importance of export controls in limiting illicit distillation and preventing unauthorized access to advanced chips, which are crucial for such attacks.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think-tank, expressed concern over the theft of U.S. frontier models through distillation by Chinese companies. He emphasized the need to restrict the sale of AI chips to prevent further advantage to these companies. Anthropic also highlighted the national security risks posed by distillation attacks, as models created through such means may lack essential safeguards against malicious activities.
If such models are openly shared, the risks could be compounded by authoritarian governments using AI for offensive cyber operations, disinformation campaigns, and mass surveillance. JS has reached out to the accused companies for comment on the allegations.

