Anthropic has accused three Chinese AI firms of creating more than 24,000 fraudulent accounts for its Claude AI model in order to improve their own systems.
These companies—DeepSeek, Moonshot AI, and MiniMax—are accused of conducting over 16 million interactions with Claude via these accounts, employing a method known as “distillation.” Anthropic stated that these laboratories specifically “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
The allegations come amid an ongoing debate over how strictly to enforce export controls on advanced AI chips, a policy designed to limit China's progress in artificial intelligence.
Distillation is a widely used training technique, typically employed by AI labs to build smaller, cheaper versions of their own models. However, it can also be exploited by competitors to effectively copy another lab's work. Notably, OpenAI recently sent a memo to House lawmakers accusing DeepSeek of using distillation to imitate its own models.
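For readers unfamiliar with the technique: in ordinary (benign) distillation, a smaller "student" model is trained to match the softened output distribution of a larger "teacher" model. A minimal sketch of the core objective, using hypothetical logits and plain NumPy, might look like this:

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution,
    # exposing more of the teacher's "dark knowledge" about near-misses.
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL divergence between the teacher's and student's softened
    # output distributions -- the quantity the student minimizes.
    p = softmax(teacher_logits, T)  # teacher's soft labels
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose logits match the teacher's incurs zero loss;
# a mismatched student incurs a positive loss.
teacher = [4.0, 1.0, 0.5]
aligned = distillation_loss(teacher, [4.0, 1.0, 0.5])
shifted = distillation_loss(teacher, [0.5, 1.0, 4.0])
```

The "attack" variant Anthropic describes works the same way in spirit, except the teacher's outputs are harvested through API calls to someone else's model rather than from a model the trainer owns.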
DeepSeek drew worldwide attention a year ago with the launch of its open-source R1 reasoning model, which performed nearly on par with models from leading American frontier labs at a far lower cost. DeepSeek is expected to soon release its newest model, DeepSeek V4, which is rumored to outperform both Anthropic’s Claude and OpenAI’s ChatGPT at coding.
The magnitude of these alleged attacks varied across the companies. Anthropic identified over 150,000 exchanges originating from DeepSeek, which appeared to be focused on enhancing foundational logic and alignment, particularly concerning censorship-safe alternatives for politically sensitive inquiries.
Moonshot AI, meanwhile, engaged in over 3.4 million exchanges, reportedly targeting agentic reasoning and tool use, coding and data analysis, computer-use agent development, and computer vision. Last month, this company introduced a new open-source model, Kimi K2.5, alongside a dedicated coding agent.
MiniMax conducted 13 million exchanges, focusing on agentic coding, tool use, and orchestration. Anthropic reported that it was able to directly observe MiniMax's activities, noting that the firm redirected nearly half of its network traffic to extract capabilities from the newest Claude model upon its release.
Anthropic says it will continue investing in defenses that make distillation attacks harder to carry out and easier to detect. The company is also calling for “a coordinated response across the AI industry, cloud providers, and policymakers” to address the issue.
The distillation attacks surface at a moment when American chip exports to China remain hotly debated. Just last month, the Trump administration formally authorized U.S. companies, including Nvidia, to export advanced AI chips such as the H200 to China. Critics argue that loosening export controls strengthens China’s AI computing capacity at a pivotal moment in the global competition for AI supremacy.
Anthropic asserts that the extensive scale of data extraction undertaken by DeepSeek, MiniMax, and Moonshot “requires access to advanced chips.”
As stated in Anthropic’s blog, “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation.”
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, expressed to TechCrunch that he is not surprised by the occurrence of these attacks.
Alperovitch commented, “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact. This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Anthropic further cautioned that distillation not only threatens to undermine American AI leadership but also carries significant national security implications.
According to Anthropic’s blog post, “Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities. Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
Anthropic highlighted the potential for authoritarian governments to deploy frontier AI for purposes such as “offensive cyber operations, disinformation campaigns, and mass surveillance,” a risk that is exacerbated if these models are made open-source.
TechCrunch has sought comments from DeepSeek, MiniMax, and Moonshot regarding these allegations.
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing.