The tech firm has resisted the War Department’s demand to lift limits on surveillance and autonomous weapons.

President Donald Trump has ordered federal agencies to halt the use of artificial intelligence systems developed by Anthropic, escalating an unprecedented confrontation between the US government and one of Silicon Valley’s most influential AI firms over military use of advanced algorithms.

In a post published Friday on Truth Social, Trump accused the company of attempting to “dictate how our great military fights and wins wars,” announcing an immediate government-wide blacklist alongside a six-month phase-out period for agencies currently relying on its technology.

“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump wrote, about an hour before the Pentagon’s deadline for the company to accept demands to lift key safeguards or risk being blacklisted as a “supply chain risk.”

The US president also warned the company to cooperate during the phase-out period or face potential legal consequences. “Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow,” Trump wrote.

The directive follows weeks of mounting tensions between Anthropic and the United States Department of War over restrictions embedded in the company’s flagship AI system, Claude. At the center of the dispute are contractual safeguards that prohibit Anthropic’s models from being used for mass domestic surveillance or fully autonomous weapons systems.
Pentagon officials have demanded those limits be removed, arguing military commanders cannot operate under constraints imposed by private contractors during wartime or crisis situations.

Anthropic chief executive Dario Amodei rejected the request, stating the company would not support uses it believes conflict with democratic norms or pose unacceptable risks.

Trump’s order effectively formalizes a threat previously raised by Secretary of War Pete Hegseth, who warned the company could be designated a “supply chain risk” – a classification typically reserved for firms linked to foreign adversaries.

Anthropic became the first commercial AI developer to deploy large language models on classified Pentagon networks under a contract valued at up to $200 million. Its Claude chatbot has been used across intelligence analysis, operational simulations, cyber operations and planning workflows, according to defense officials. US media reports indicated the system also played a key role in planning and conducting the raid targeting Venezuelan president Nicolas Maduro.