After Banning Anthropic From Military Use, Pentagon Still Relying Heavily on It in Iran War

Last week, Anthropic’s CEO Dario Amodei publicly drew a line in the sand with the US military, insisting that its AI models may not be used for mass surveillance of Americans or deadly autonomous weapons.

The move infuriated officials at the Pentagon. Defense secretary Pete Hegseth came out in full force, accusing Anthropic of trying to “seize veto power over the operational decisions of the United States military” and banning the company from ever doing any business with any US government entity, “effective immediately.”

President Donald Trump ordered agencies to “immediately cease” using Anthropic’s technology on Friday, while simultaneously claiming that the tool will be phased out of all government work over the next six months.

But given the government’s extensive use of the company’s chatbot Claude during its deadly offensive in Iran, it’s clearly having trouble making do without it. As The Washington Post reports, the US military is extensively using Palantir’s Maven Smart System in the conflict, which has had Anthropic’s Claude chatbot integrated since 2024.

Last week, the Wall Street Journal first reported on the Pentagon’s use of Claude to select attack targets in Iran, hours after the White House announced its ban.

According to WaPo‘s sources, the system spits out precise location coordinates for missile strikes and prioritizes them by importance.
Maven was also used during the US military’s invasion of Venezuela and the kidnapping of its president, Nicolás Maduro.

Central Command is “heavily using” the Maven system, Navy admiral Liam Hulin told WaPo.

Military commanders told the newspaper that the military will continue using Anthropic’s tech, regardless of the president ordering them not to, until a viable replacement emerges.

“Whether his morals are right or wrong or whatever, we’re not going to let [Anthropic CEO Dario Amodei’s] decision-making cost a single American life,” a source told WaPo.

It remains to be seen whether OpenAI will swoop in to fill Anthropic’s place. After Amodei’s falling-out with the Pentagon, CEO Sam Altman saw an opportunity to strike last week and signed a contract with the Department of Defense, a move that triggered an enormous and ongoing PR crisis and sent uninstalls of ChatGPT soaring.

Whatever the chatbot of choice for military commanders may end up being, the rampant use of AI in war has taken researchers aback. For one, even the most sophisticated chatbots still struggle with the very basics and continue to be haunted by rampant hallucinations. That could have immense implications when it comes to matters of life and death.

So far, the offensive in Iran has resulted in the killing of many hundreds of Iranian civilians, as well as six American soldiers.

“The key paradigm shift is that AI enables the US military to develop targeting packages at machine speed rather than human speed,” Center for a New American Security executive vice president Paul Scharre told WaPo. But “AI gets it wrong,” he added. “We need humans to check the output of generative AI when the stakes are life and death.”

More on Anthropic: Sam Altman Is Realizing He Made a Gigantic Mistake