Anthropic accuses Chinese AI labs of massive ‘distillation’ attack on Claude

Anthropic has accused three leading Chinese AI firms of secretly orchestrating a large-scale campaign to illegally replicate the capabilities of its Claude AI model. According to Anthropic, DeepSeek, Moonshot AI, and MiniMax created over 24,000 fake accounts to facilitate more than 16 million interactions with Claude. The Chinese labs allegedly employed a technique called "distillation" to extract advanced outputs from a frontier model, intending to train and enhance their own models at a reduced cost.

MiniMax conducted the largest operation, with 13 million exchanges focused on agentic coding and tool management. Moonshot AI carried out over 3.4 million exchanges targeting reasoning and computer vision, while DeepSeek produced more than 150,000 exchanges aimed at strengthening logical foundations and bypassing censorship.

These accusations emerge amid ongoing debate over U.S. export restrictions on advanced AI chips. Anthropic stated that large-scale distillation attacks demand significant computing resources, underlining the importance of limiting Chinese access to high-end hardware.

The San Francisco-based startup further warned that these activities pose serious national security threats. Unlike U.S. models, which include safeguards against aiding bioweapon development or cyberattacks, illegally distilled models lack these protections entirely. Anthropic cautioned that Chinese authorities could exploit such unrestricted models for mass surveillance, disinformation, and cyberattacks.

In response, the company is urging a coordinated effort across the industry, involving policymakers, cloud providers, and competing AI labs, to guard against future misuse of these technologies.
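For readers unfamiliar with the term, the core of distillation can be sketched in a few lines: a "student" model is trained on outputs harvested from a stronger "teacher" model. The sketch below is purely illustrative; the function names are hypothetical and this is not a description of any lab's actual pipeline.

```python
# Minimal illustration of the data-collection step in model distillation:
# query a stronger "teacher" model at scale and record (prompt, response)
# pairs, which would later serve as supervised training data for a
# cheaper "student" model. All names here are hypothetical stand-ins.

def teacher_model(prompt: str) -> str:
    # Stand-in for an API call to a frontier model (e.g. a chat completion).
    return f"teacher answer for: {prompt}"

def collect_distillation_data(prompts: list[str]) -> list[tuple[str, str]]:
    # Harvest teacher outputs -- done across thousands of accounts and
    # millions of prompts, this is the "distillation attack" Anthropic describes.
    return [(p, teacher_model(p)) for p in prompts]

prompts = ["write a sort function", "explain recursion"]
dataset = collect_distillation_data(prompts)
print(len(dataset))  # 2
```

In a real pipeline, the collected pairs would then be used to fine-tune the student model with a standard supervised objective, transferring much of the teacher's capability at a fraction of the original training cost.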