Federal Court Battle: Pentagon Defends Decision to Blacklist Anthropic Over AI Safety Rules

Key Takeaways

- On March 3, the Pentagon designated Anthropic a security threat after unsuccessful negotiations to remove AI safety guardrails preventing autonomous weapons and domestic surveillance applications.
- In a March 17 court filing, the Trump administration defended the blacklisting as lawful and argued Anthropic's First Amendment challenge lacks merit.
- Defense officials claim Anthropic presents an "unacceptable risk" because of its potential ability to disable or modify AI systems during critical military operations.
- The AI company has launched two legal challenges, one in California federal court and another in a D.C. appeals court, contesting the designation.
- Microsoft, both an Anthropic customer and a Pentagon contractor, submitted a supporting brief warning of potential damage to the AI industry.

A high-stakes legal battle is unfolding in federal court between the U.S. government and Anthropic, the artificial intelligence company behind Claude, over a Pentagon blacklisting that threatens the firm's bottom line.

"The Trump administration vowed a legal fight to oust Anthropic from all US government agencies following a dispute over how the company's AI technology would be used" https://t.co/3mZ5mkDBOq (Bloomberg, @business, March 18, 2026)

Defense Secretary Pete Hegseth officially labeled Anthropic a national security supply chain threat on March 3, following the collapse of extended negotiations between Pentagon officials and the AI startup.

At the heart of the conflict lies Anthropic's refusal to eliminate safety restrictions governing its AI technology. The company firmly declined to permit its systems to be deployed for autonomous weaponry or domestic surveillance.

Pentagon officials deemed these limitations problematic.
According to their court submission, maintaining Anthropic's involvement in military infrastructure would create "unacceptable risk" within defense supply networks.

Government attorneys also expressed alarm about Anthropic's potential capability to "disable its technology or preemptively alter the behavior of its model" while military missions are underway, should the company determine its ethical guidelines were being violated.

Administration Frames Issue as Conduct Rather Than Speech

The Justice Department, representing the Trump administration's position, rejected Anthropic's constitutional arguments. Officials characterized the matter as one of contractual obligations and national security imperatives rather than free expression.

According to the government's legal brief, it was Anthropic's decision to maintain its restrictions, described as "conduct, not protected speech," that prompted President Trump to order all federal entities to terminate relationships with the company.

Anthropic initiated its primary legal action in California federal court on March 9. The complaint characterizes the government's action as "unprecedented and unlawful," alleging violations of constitutional protections including free speech and due process.

A companion lawsuit was filed in a Washington, D.C. appeals court, challenging an additional Pentagon designation under separate statutory authority, one that could potentially expand the blacklist across the entire federal bureaucracy.

Tech Giant Microsoft Files Brief Supporting Anthropic

Microsoft, which integrates Anthropic's Claude AI into its products while simultaneously serving as a Pentagon technology supplier, submitted an amicus brief backing Anthropic's position.
The computing giant cautioned that the designation risks undermining the artificial intelligence sector.

"This is not the time to put at risk the very AI ecosystem that the administration has helped to champion," Microsoft wrote.

Anthropic indicated it was examining the government's recent court submission. Company representatives emphasized the litigation was "a necessary step to protect our business, our customers, and our partners."

Anthropic has also challenged assertions that its technology creates security vulnerabilities. Company officials maintain that AI systems have not reached the safety threshold necessary for autonomous weapons deployment and that they oppose mass surveillance on ethical grounds.

The White House did not respond to a request for comment.

Company leadership has projected the blacklisting could result in billions of dollars in financial damage by 2026. Such designations have historically been applied to entities from adversarial nations, including Chinese telecommunications company Huawei.

The post Federal Court Battle: Pentagon Defends Decision to Blacklist Anthropic Over AI Safety Rules appeared first on Blockonomi.