Anthropic to fight US govt in court over ‘supply-chain risk’ label: Behind the standoff, and what it means for Claude AI


On January 3, 2026, when the US captured Venezuelan President Nicolás Maduro as part of Operation "Absolute Resolve" without sustaining any American casualties, it integrated an unexpected asset into its field operations: Claude, the AI system built by the artificial intelligence firm Anthropic. This ignited a publicised standoff between the US Department of Defense (DOD) and Anthropic over Claude's inclusion in military operations.

Now that the Pentagon has branded it a "supply-chain risk", Anthropic's ability to work with the DOD or any other institution under contract with the US government has been effectively revoked. Even as the White House announced a six-month phase-out period for all uses of Claude in existing systems, the company has vowed to challenge the Pentagon's risk designation in court.

So, why did Anthropic choose to let itself be cut out of the world's largest military budget in the first place? We explain.

Reports of Claude's involvement in the January raid prompted Anthropic to ask the Pentagon questions it was not particularly keen to answer, and this escalated into a standoff between US Defense Secretary Pete Hegseth and Anthropic CEO Dario Amodei. The DOD issued a strict ultimatum to Anthropic: drop its AI safeguards by 5:01 pm on February 27 and allow the military to use Claude for "all lawful purposes", or face severe retaliation. Anthropic's refusal to cross its own red lines — specifically regarding fully autonomous weapons and mass domestic surveillance — prompted Hegseth to officially designate the firm as a "supply-chain risk" under Section 3252 of Title 10, United States Code, the principal body of military law governing the US armed forces.
The section outlines requirements for handling information relating to supply-chain risk.

Even as OpenAI swiftly swooped in to take over the Pentagon contract, Amodei wrote a scathing internal memo to his employees that was subsequently leaked to the press. Describing OpenAI's safety guardrails as "maybe 20% real and 80% safety theatre", he argued that OpenAI was merely placating the government while Anthropic cared about preventing abuse. He also suggested that the Trump administration had retaliated because Anthropic had failed to donate to the president's campaign. Although he apologised for the memo in an interview with The Economist, the episode has not deterred Anthropic from a possible legal challenge.

Making the Pentagon blacklist

Anthropic's decision to reject the Pentagon is rooted in its recent market valuation and the limited economic importance of its $200 million contract with the DOD. Last month, Anthropic's Series G (seventh round) funding raised approximately $30 billion, valuing the firm at $380 billion. According to AI investments facilitator MLQ.ai, this latest funding released Anthropic from the leverage the US government had over it.

With an estimated $14 billion in annual recurring revenue, Anthropic — unlike defence majors Raytheon or Northrop Grumman — does not have the US government as its dominant single buyer. Its ability to fight the "supply-chain risk" designation in federal court for years is therefore untouched.

Secondly, the company's adoption of a global business-to-business (B2B) model draws completely different boundaries for it compared with OpenAI or Google, which interact directly with consumers. In the B2B market, a predictable regulatory framework is critical, with customer enterprises meticulously drafting plans years in advance.
They expect the software they invest in to remain clear of compliance breaches and government fines.

After its latest round of funding, Anthropic confirmed that eight of the top 10 companies (by revenue) on the Fortune 500 list use Claude. These companies legally bind Claude to strict international frameworks, especially the European Union's Artificial Intelligence Act of 2024. The AI Act draws red lines against mass surveillance and the use of biometric data to categorise individuals, besides mandating rigorous, case-by-case proportionality tests that force authorities to weigh the scale of harm against individual rights.

(Photo: OpenAI CEO Sam Altman during the Express Adda at New Delhi on February 20, 2026. Photo: Abhinav Saha)

The Pentagon's demand that Claude be used for "all lawful purposes" attempted to bypass these constraints, illustrating how US military doctrine and Anthropic's business model are incompatible. For Anthropic, this creates a glaring commercial risk: if the company were to compromise its algorithms to grant the US military unrestricted operational freedom, it would destroy the guardrails that ensure compliance with the EU's stringent rules and jeopardise its clients' use of its products. In its defiance, Anthropic was protecting its international market share.

Corporate clients also fear intellectual property leakages and the compromise of internal company data, as a recent Gartner report suggests. Incidentally, the Pentagon's blacklist serves as an assurance to global enterprises that Anthropic's core ideology remains steadfast even in the face of State intrusion.

'All lawful purposes'

As to why building two separate models — one for enterprise customers as per EU regulations and another for the Pentagon — remains unfeasible, one must understand the architecture of AI models.
The US military's mandate for "all lawful purposes" — which also covers autonomous lethal targeting — cannot be accommodated with a simple software patch. To avoid compromising its commercial enterprise product, Anthropic would be forced to maintain a separate, unconstrained model for the military.

Maintaining parallel models also presents a grave cybersecurity risk. Research has shown that AI models inevitably memorise and leak portions of their training data. If Anthropic attempted to save compute costs by having the military and commercial models share any foundational architecture, the risk of classified data from the military model bleeding into the other would be so great that no insurer would cover it, leaving Anthropic liable for billions of dollars in damages out of its own pocket.

Finally, the unpredictability of Anthropic's human capital — the intangible economic value of its workforce's skills and knowledge — cannot be ignored. Episodes such as the engineer revolt that forced Google to abandon Project Maven, a programme developing the Pentagon's drone capabilities, in 2018 have shown that the ideological rift within Silicon Valley is the industry's ultimate chokepoint.

Considering that Anthropic was founded by individuals who defected from OpenAI with an agenda of safety over the commercialisation of AI, there is always a risk that creating an unbridled military system could trigger an exodus of the scarce engineering talent required to maintain the $380-billion enterprise business.

OpenAI swoops in

With Anthropic's exit leaving a vacuum, rivals like OpenAI moved in aggressively. Where Anthropic CEO Dario Amodei drew a hard line at integration with lethal use, OpenAI CEO Sam Altman publicly tweeted his support for equipping the US and its allies.
Altman's statement that he was "terrified of a world where AI companies act like they have more power than the government" was perceived as an indication that OpenAI had agreed to provide the DOD with the customised AI architecture that Anthropic refused to build.

Moreover, while both companies proclaim their refusal to participate in domestic mass surveillance, foreign surveillance is a different matter. The operation against Maduro seemingly did not cross Anthropic's red line, even though Claude's integration into a working military kill chain — coupled with the Pentagon's demand for untargeted surveillance capabilities — would instantly trigger violations under the EU's AI Act.

By allowing OpenAI to monopolise the Pentagon's AI contracts — and absorb the massive regulatory, reputational, and international liabilities that come with being the US military's official AI — Anthropic appears to have cemented itself as the strictly neutral, sovereign architecture globally, making Claude the default, risk-free choice for the rest of the world's enterprise economy.