Anthropic, a San Francisco-based company, made waves in the artificial intelligence industry by accusing three prominent Chinese AI laboratories — DeepSeek, Moonshot AI, and MiniMax — of engaging in coordinated campaigns to siphon capabilities from its Claude models. The labs allegedly used tens of thousands of fraudulent accounts to conduct over 16 million exchanges with Claude in violation of Anthropic’s terms of service and regional access restrictions. The revelation spotlights a practice known as distillation, in which a competitor harvests an established model’s outputs to leapfrog years of research and investment.
Distillation is the process of transferring knowledge from a larger AI model, known as the “teacher,” into a smaller, more efficient one, known as the “student.” While distillation is a legitimate training method, it can also be exploited by competitors to capture capabilities developed by others. The practice drew widespread attention when DeepSeek released its R1 reasoning model, which matched or approached the performance of leading American models at a fraction of the cost, sparking a wave of replication and experimentation across the AI community.
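The core training signal behind distillation can be illustrated with a minimal sketch. The student is trained to match the teacher’s full probability distribution over outputs (the “soft labels”), not just its final answer. The function names, the temperature value, and the NumPy implementation below are illustrative only; they are not the pipeline of any lab named in this article.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions -- the standard knowledge-distillation objective.

    A higher temperature exposes more of the teacher's relative
    preferences among "wrong" answers, which is where much of the
    transferable knowledge lives.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    kl = (p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return float(kl.mean() * temperature**2)
```

Minimizing this loss over many teacher responses is what lets a smaller student model absorb capabilities the teacher took far more compute to acquire; when the “teacher” is a rival’s commercial API, the same mechanics become the alleged capability siphoning described above.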
Anthropic’s investigation revealed that DeepSeek, Moonshot AI, and MiniMax conducted sophisticated operations to extract capabilities from Claude, focusing on areas such as agentic reasoning, tool use, and coding. DeepSeek, in particular, employed techniques such as rubric-based grading tasks and generating censorship-safe alternatives to policy-sensitive queries. Moonshot AI targeted agentic reasoning, tool use, and computer vision, while MiniMax focused on agentic coding and tool use.
These labs accessed Anthropic’s models through commercial proxy services that resell access to frontier AI models. These services operate using “hydra cluster” architectures, distributing traffic across multiple accounts to evade detection. Beyond the commercial harm, Anthropic frames illicit distillation as a national security threat: distilled models lack the original safeguards and could be turned to offensive cyber operations, disinformation campaigns, and mass surveillance.
While the legal status of unauthorized distillation remains unsettled, Anthropic has implemented defensive measures to identify and block such attacks, and it emphasizes that industry-wide collaboration is needed. The disclosure is expected to influence policy debates on chip export controls and government device bans related to DeepSeek.
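The article does not describe Anthropic’s actual detection methods, but the general shape of such defenses can be sketched with two toy heuristics: flagging accounts with abnormally high request volume, and flagging distinct accounts that submit identical prompts, a crude signal of one operator splitting traffic across many accounts in the “hydra”-style pattern described earlier. All names and thresholds here are hypothetical.

```python
from collections import Counter

def flag_suspicious_accounts(request_log, volume_threshold=1000,
                             shared_prompt_min=3):
    """Toy heuristics for distillation-style API abuse.

    request_log: iterable of (account_id, prompt) pairs.
    Returns the set of account_ids flagged by either heuristic:
      1. request volume above volume_threshold, or
      2. membership in a group of >= shared_prompt_min accounts
         that all submitted the exact same prompt.
    """
    volume = Counter()
    prompt_accounts = {}  # prompt -> set of accounts that sent it
    for account, prompt in request_log:
        volume[account] += 1
        prompt_accounts.setdefault(prompt, set()).add(account)

    heavy = {a for a, n in volume.items() if n > volume_threshold}
    coordinated = set()
    for accounts in prompt_accounts.values():
        if len(accounts) >= shared_prompt_min:
            coordinated |= accounts
    return heavy | coordinated
```

Real defenses would need far more signal (IP and payment correlation, embedding-level similarity of prompts rather than exact matches, behavioral fingerprints), but the sketch shows why hydra-cluster traffic splitting is specifically designed to defeat the first heuristic and why cross-account correlation like the second becomes necessary.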
For AI industry decision-makers, the implications are clear: API security is now as critical as model development. Anthropic’s detailed accusations may prompt a coordinated response to address the growing threat of distillation attacks. Whether this leads to enhanced security measures or escalates an arms race between attackers and defenders remains to be seen.