Feb 26, 2026, 01:01 PM IST
By Shobhankita Reddy

Anthropic has accused three Chinese AI laboratories, DeepSeek, MoonShot, and MiniMax, of launching "industrial-scale campaigns" to reverse engineer its frontier capabilities in agentic reasoning, tool use, and coding. The company does not offer commercial access to its model, Claude, in China, or to subsidiaries of Chinese companies anywhere else. It accuses these laboratories of using proxy services to circumvent those access restrictions and of deploying coordinated, targeted prompts, collectively amounting to 16 million exchanges, aimed at retrieving data about its models at scale.

Notwithstanding the justifiable charge that these companies violated Anthropic's terms of service, the episode highlights an uncomfortable double standard. Western AI firms crawled the web to train their models, calling this "standard industry practice" at a time when consent was still ambiguous and before there was any understanding of the infrastructure needed to support opt-out mechanisms.

Anthropic's own track record in this regard is dubious. As recently as June last year, Reddit sued the company for unauthorised scraping of its content and for allegedly bypassing the paid, legitimate data-licensing channels that Reddit maintains with Google and OpenAI. In September 2025, Anthropic agreed to a $1.5 billion settlement, among the largest in copyright history to date, to resolve a lawsuit alleging that it had trained Claude on pirated books from datasets like LibGen. Crying foul today, when something similar is being done to it, is less a principled defence of fairness than a reflection of norms being shifted to secure a technological advantage.

Additionally, Anthropic's allegations against China carry national security undertones.
It claims that illicit distillation, the practice of training a less capable model on the outputs of a stronger one, produces models that lack the safety guardrails meant to prevent state and non-state actors from using AI for offensive cyber operations or for developing weapons of mass destruction. The company further claims that open-sourcing such models only multiplies this risk.

Some of these risks are real, as indicated by reports that DeepSeek has allegedly been used by the Chinese military. Coming from Anthropic, however, this is posturing at best, especially given that the company is simultaneously resisting the demands of the US military.

Anthropic's ongoing tussle with the Pentagon reveals the fragile balancing act the company has been attempting: serving as a defence supplier in the US while establishing its red lines. Defence deals are lucrative for their long-term revenue stability and political capital, but Anthropic's pursuit of military contracts also seems motivated by the reputational moat it stands to gain from being the military's trusted partner, a credential it can leverage in its dealings with customers in highly regulated sectors like healthcare and financial services.

The company won a $200 million contract with the Department of Defense last year and is officially part of the US defence supply chain, but its insistence on safeguards against fully autonomous weapons targeting and US domestic surveillance has strained the relationship. The Pentagon has pushed back against the company's stated red lines, holding that commercial AI should be available for "all lawful purposes". It has further threatened to designate the company a "supply chain risk", a label that could have dire consequences for its commercial prospects.
Further, recent reports reveal that the US operation to abduct Venezuelan President Maduro used Claude.

More importantly, the episode highlights the inevitability of AI diffusion. If a model's outputs enable replication, even partially, through capability inference and observable behaviour, the question is not one of legal excludability so much as of how long any technological lead can last. This runs counter to the view of Anthropic's CEO, who, calling AI a technology in its adolescent phase and capable of massive harm, has compared allowing the sale of advanced AI chips to China to selling nuclear weapons to North Korea.

The nuclear analogy assumes that technology denial regimes focused narrowly on a few inputs can meaningfully cap global risk. However, as the company's own experience shows, preventing AI proliferation is unlike preventing nuclear proliferation, and it will only grow more difficult in a world where knowledge circulates through open research ecosystems, talent mobility, and rapid technology adoption driven by economic incentives. Consequently, the risk profile is likely to differ materially from what Anthropic's claims indicate.

Still, episodes like this indicate that Chinese capabilities, despite their open-source credibility and widespread adoption, remain in a position of technological catch-up relative to their closed-source US peers. In the US, this weakness on the Chinese side will be used to push for further export restrictions on China. India has typically been caught, inadvertently, in the middle of such bilateral frictions, as seen in the since-rescinded AI diffusion rules. These developments are therefore worth tracking as signals of a changing technology geopolitics landscape for the rest of us.

The writer is a researcher in technology geopolitics at the Takshashila Institution, Bangalore. Views are personal