Anthropic Accuses Chinese Labs of AI Model Theft Amid Chip Debate

Anthropic has leveled serious allegations against three Chinese AI companies, claiming they created more than 24,000 fake accounts to gain access to its Claude AI model. The labs, identified as DeepSeek, Moonshot AI, and MiniMax, are accused of generating more than 16 million exchanges with Claude, using a technique known as “distillation” to enhance their own AI models.

This revelation arrives at a crucial time as the United States grapples with how to enforce export controls on advanced AI chips, a policy designed to restrict China’s AI development. Distillation is a common method employed by AI laboratories to train their models, enabling them to create smaller and more cost-effective versions. However, competitors can exploit this process to replicate the work of other labs, essentially copying their innovations.
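At its core, distillation trains a smaller “student” model to imitate the output distribution of a larger “teacher” model rather than learning from raw data alone. The sketch below is a minimal, illustrative NumPy version of the standard soft-target loss; the temperature value, the tiny four-token vocabulary, and the logit values are hypothetical, and real distillation pipelines operate at vastly larger scale.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by a temperature."""
    z = logits / temperature
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the
    student's -- the core training signal in distillation."""
    p = softmax(teacher_logits, temperature)   # soft targets from teacher
    q = softmax(student_logits, temperature)   # student predictions
    return float(np.sum(p * np.log(p / q)))

# Hypothetical next-token logits over a 4-token vocabulary.
teacher = np.array([4.0, 1.0, 0.5, 0.2])
student = np.array([2.0, 1.5, 1.0, 0.8])
loss = distillation_loss(teacher, student)
```

Minimizing this loss over many exchanges pulls the student's behavior toward the teacher's, which is why large-scale access to a frontier model's outputs can substitute for some of the original training effort.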

Details of the Allegations

In a recent blog post, Anthropic stated that the Chinese labs specifically targeted Claude’s unique capabilities, including agentic reasoning, tool use, and coding. The company noted that it had tracked over 150,000 exchanges from DeepSeek aimed at improving foundational logic and alignment, particularly regarding censorship-safe alternatives for sensitive policy inquiries. Meanwhile, Moonshot AI reportedly engaged in over 3.4 million exchanges, focusing on agentic reasoning, tool use, coding, and data analysis.

DeepSeek gained recognition last year for its open-source R1 reasoning model, which closely matched the performance of American frontier labs while being significantly cheaper. The company is on the verge of releasing DeepSeek V4, a model that is anticipated to outperform both Claude and OpenAI’s ChatGPT in coding tasks. Anthropic claims that MiniMax directed nearly half of its traffic to siphon capabilities from Claude’s latest model upon its launch, generating a staggering 13 million exchanges focused on agentic coding and orchestration.

Impact on AI Development and Security

Anthropic has voiced a commitment to enhancing its defenses against such distillation attacks while urging a coordinated response from the AI industry, cloud providers, and policymakers. The company contends that the scale of extraction conducted by DeepSeek, MiniMax, and Moonshot necessitates access to advanced chips, which reinforces the argument for stringent export controls.

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, stated, “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact.” He argues that these revelations provide further justification for restricting the sale of AI chips to these companies.

According to Anthropic, the implications of distillation attacks extend beyond competitive advantage, potentially posing national security risks. The company highlights that systems built by U.S. firms are designed to prevent both state and non-state actors from leveraging AI for harmful purposes, such as developing bioweapons or executing cyber-attacks. Models generated through illicit distillation are unlikely to retain these protective measures, increasing the risk of dangerous capabilities proliferating without adequate safeguards.

As the debate surrounding American chip exports to China continues, the implications of such allegations may influence future policy decisions. The Trump administration recently permitted U.S. companies, including Nvidia, to export advanced AI chips like the H200 to China, a move that critics assert could bolster China’s AI computing capacity at a pivotal moment in the global competition for AI dominance.

In summary, the accusations from Anthropic underscore significant concerns about intellectual property theft and the potential risks associated with unregulated AI development. As these discussions unfold, the outcomes may shape the future landscape of AI technology and international relations.