Feb 23

Anthropic Accuses Chinese Rivals of Training AI with Claude

Originally reported by The Verge

DeepSeek reportedly focused on enhancing its models by leveraging Claude's reasoning abilities and creating "censorship-safe alternatives to politically sensitive questions."

Anthropic has accused DeepSeek and two other Chinese AI firms of illicitly using its Claude AI model to improve their own offerings. In an announcement issued Monday, Anthropic described these "industrial-scale campaigns," which reportedly involved roughly 24,000 fraudulent accounts and more than 16 million interactions with Claude, as first reported by The Wall Street Journal.

The three companies, DeepSeek, MiniMax, and Moonshot, are accused of "distilling" Claude: training a smaller AI model using a more advanced one as its foundation. While Anthropic acknowledges that distillation is a "legitimate training method," it cautions that the technique can "also be used for illicit purposes," particularly to "acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."

Furthermore, Anthropic states that models illicitly developed through distillation are "unlikely" to retain existing safety protocols. The company warns that "Foreign labs that distill American models can then feed these unprotected capabilities into military, intelligence, and surveillance systems — enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."

DeepSeek, which has drawn significant attention in the AI sector for its potent yet efficient models, reportedly engaged in over 150,000 interactions with Claude, specifically targeting its reasoning capabilities, according to Anthropic. The company is also accused of using Claude to generate "censorship-safe alternatives to politically sensitive questions about dissidents, party leaders, or authoritarianism." Last week, in a separate communication to lawmakers, OpenAI leveled similar accusations against DeepSeek, citing "ongoing efforts to free-ride on the capabilities developed by OpenAI and other U.S. frontier labs."

Moonshot conducted over 3.4 million exchanges with Claude, while MiniMax engaged in more than 13 million. Anthropic is now urging other participants in the AI industry, cloud service providers, and legislative bodies to confront the issue of distillation, further suggesting that "restricted chip access" could serve as a mechanism to curb model training and, consequently, "the scale of illicit distillation."

#AI #News #Tech
Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
