Anthropic has identified industrial-scale operations by three major AI laboratories, DeepSeek, Moonshot, and MiniMax, to extract advanced capabilities from its flagship model, Claude, without authorization. These entities generated over 16 million exchanges with Claude through approximately 24,000 fraudulent accounts, violating Anthropic’s terms of service and regional access restrictions. The underlying technique, known as 'distillation,' is legitimate in itself: AI labs routinely use it to create smaller, cheaper versions of their own models for customers. In this case, however, it was turned against a rival’s model to extract capabilities without permission.
The incident, which Anthropic reported on February 23, 2026, highlights a critical security challenge in the rapidly evolving AI landscape: distillation, a standard method for model compression, has become a vector for unauthorized capability extraction. By training less capable models on the outputs of a stronger one, these labs were able to produce models that mimic Claude’s behavior at a fraction of the cost of developing it. The practice undermines the integrity of AI model development and poses serious risks to the security of advanced AI systems.
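In outline, distillation works by querying a strong 'teacher' model and training a smaller 'student' to reproduce its outputs. The sketch below illustrates the idea under toy assumptions; the models, sizes, and data are invented placeholders, and nothing here reflects Anthropic's systems or the attackers' actual pipelines.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM = 1000, 64  # toy vocabulary size and hidden width (assumptions)

# Stand-in "strong" teacher and much smaller student. In the reported scenario
# the teacher would be a frontier model reachable only through its API.
teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Linear(DIM, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, DIM // 4), nn.Linear(DIM // 4, VOCAB))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

prompts = torch.randint(0, VOCAB, (32,))  # placeholder batch of prompt tokens

# Step 1: query the teacher and keep only its outputs, mimicking API access.
with torch.no_grad():
    teacher_outputs = teacher(prompts).argmax(dim=-1)

# Step 2: train the student to reproduce those outputs.
optimizer.zero_grad()
loss = F.cross_entropy(student(prompts), teacher_outputs)
loss.backward()
optimizer.step()
print(f"student imitation loss: {loss.item():.3f}")
```

At API scale, the teacher's 'outputs' are simply the text completions returned by the service, which is why high-volume automated querying is the signature of this kind of extraction.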
Anthropic emphasizes that while distillation is a widely accepted and legitimate practice within the AI industry, its misuse by competitors to steal capabilities represents a significant threat. The company has implemented advanced monitoring systems to detect and prevent such activities, focusing on fraudulent account creation, anomalous usage patterns, and unauthorized access to high-level model outputs.
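Anthropic has not disclosed how its monitoring works. Purely as an illustration of the kind of signal such a system might use, the sketch below flags accounts whose traffic looks scripted: very high request volume combined with low prompt diversity. Every threshold, field name, and data structure here is an assumption.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Request:
    account_id: str  # hypothetical log schema, invented for this sketch
    prompt: str

def flag_suspicious_accounts(log: list[Request],
                             volume_threshold: int = 10_000,
                             max_unique_ratio: float = 0.5) -> set[str]:
    """Flag accounts whose traffic looks scripted: very high request volume
    combined with low prompt diversity (templated, near-duplicate queries).
    Both thresholds are illustrative, not real operating values."""
    volume = Counter(r.account_id for r in log)
    prompts_seen: dict[str, set[str]] = {}
    for r in log:
        prompts_seen.setdefault(r.account_id, set()).add(r.prompt)

    flagged = set()
    for account, count in volume.items():
        unique_ratio = len(prompts_seen[account]) / count
        if count > volume_threshold and unique_ratio < max_unique_ratio:
            flagged.add(account)
    return flagged
```

A real system would combine many such heuristics, such as account-creation patterns, payment and network fingerprints, and behavioral clustering, rather than rely on any single rule.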
Industry experts warn that the incident could shape how AI model security is approached globally. If competitors can extract and replicate advanced capabilities without permission, the result could be a fragmented and unstable AI ecosystem; without robust safeguards, the integrity of AI development could be compromised, along with the quality and reliability of models across the board.
The implications of this incident are far-reaching. As AI models become more sophisticated and integral to critical applications, the risk of unauthorized model extraction grows. Anthropic’s response underscores the need for stronger safeguards and transparency in the AI development process. The company is working with global partners to enhance its security protocols and ensure that distillation remains a tool for innovation rather than a vector for theft.
The incident also exposes a structural vulnerability in the current AI development framework: the line between legitimate and illegitimate use of distillation is increasingly blurred. As models grow more powerful, industry stakeholders will need to address this potential for misuse through collaborative, industry-wide safeguards.
Anthropic’s countermeasures, including real-time monitoring and hardened security protocols, are intended to protect its models from unauthorized use. The company stresses that maintaining trust and transparency in the AI ecosystem will be essential as these technologies continue to evolve.