Artificial intelligence firm Anthropic has accused several Chinese companies, including DeepSeek, of illicitly using its Claude AI model to train their own AI systems, according to a report in the New York Times.
In a statement posted on its website, Anthropic alleged that DeepSeek, Moonshot, and MiniMax leveraged distillation, a technique in which one AI model is trained on the outputs of another, to capitalize on Claude's capabilities.
The companies reportedly used over 24,000 fake accounts to generate more than 16 million conversations with Claude as part of their distillation process, in what Anthropic called a clear violation of its terms of service.
Anthropic is urging the U.S. government and other American AI companies to collaborate on new methods to prevent Chinese firms from exploiting U.S. AI models through distillation. The company argues that Chinese access to these models poses a risk to U.S. national security.
A separate report in the Financial Times indicates that OpenAI has leveled similar accusations against DeepSeek, alleging the company used distillation to train its models on ChatGPT's outputs. OpenAI has reportedly briefed the U.S. Senate on the matter, urging action.
DeepSeek's AI model garnered significant attention upon its release last year, with the Chinese company claiming its training costs were substantially lower than those of American models. This claim reportedly led to a sharp decline in the stock prices of several U.S. technology companies.
Dmitri Alperovitch, co-founder of cybersecurity firm CrowdStrike, stated he was unsurprised by these alleged attacks, according to a report by TechCrunch.
"It has been clear for some time that part of the reason for the rapid progress of Chinese AI models is the theft of leading U.S. AI models using distillation," Alperovitch said.
He suggested this alone is sufficient reason to halt the sale of U.S. chips to Chinese companies, to keep them from improving their technical efficiency and their ability to steal leading AI models.