NEWS
05 March 2026

The conflict in Iran is highlighting the use of artificial intelligence in warfare.

By Nicola Jones

Missiles are being guided with the assistance of AI in the Middle East. Credit: Eli Basri/SOPA Images/LightRocket via Getty

The escalating conflict between the United States, Israel and Iran has thrown a spotlight on the use of artificial intelligence (AI) in warfare. Just one day before the US–Israeli offensive began on 28 February, the US government sidelined one of its main AI suppliers as part of a disagreement that underlines ethical concerns about AI's use.

And this week, academics and legal experts are meeting in Geneva, Switzerland, to discuss lethal autonomous weapons systems and the procurement of AI in the military, as part of long-running efforts to arrive at an international agreement on the ethical or legal uses of AI in warfare. Rapid technological development is outpacing slow international discussions, says political scientist Michael Horowitz at the University of Pennsylvania in Philadelphia.

"The current failure to regulate AI warfare, or to pause its usage until there is some agreement on lawful usage, seems to suggest potential proliferation of AI warfare is imminent," says Craig Jones, a political geographer at Newcastle University, UK, who researches military targeting.

AI on the battlefield

The US military uses AI based on large language models (LLMs) for logistical and office support, intelligence gathering and analysis, and decision support on the battlefield, says Horowitz. The Maven Smart System, which uses AI for applications including image processing and tactical support, speeds up attack capabilities by suggesting and prioritizing targets, for example. The system has been used in previous conflicts and in the attacks on Iran, according to reports from the Washington Post and other news outlets.
"The details are not publicly known," Horowitz says.

It is possible that AI's precision targeting could help to reduce civilian casualties during war. However, the ongoing conflicts in Ukraine and Gaza, in which AI is being used to assist target identification and drone navigation, among other things, have seen high civilian death tolls. "There is no evidence that AI lowers civilian deaths or wrongful targeting decisions and it may be that the opposite is true," says Jones.

The possibility of using AI to guide lethal autonomous weaponry without human oversight is a hugely controversial area. Armed forces might appreciate the ability to, say, use AI-powered drones to autonomously identify, find and kill enemy combatants. But existing humanitarian laws require that such weapons be able to distinguish between military and civilian targets. LLM-powered, fully autonomous weapons without human oversight are not currently reliable and do not comply with international laws, Horowitz says.

Deep concerns

Future potential uses for AI are at the heart of the disagreement between the US Department of War (formerly called the Department of Defense) and Anthropic, an AI company based in San Francisco, California. Since 2024, the Maven system has been supported by Anthropic's Claude LLM as part of a US$200-million contract with the Department of War.

In January, the department issued a memo declaring, among other things, that contracts procuring AI for the government must state that the AI can be put to "any lawful use", without constraints. But Anthropic refused to remove safeguards, saying that Claude can't be used for mass domestic surveillance or to guide fully autonomous weapons. On 27 February, US President Donald Trump directed the government to stop using technology from Anthropic.
"This whole thing blew up as a theoretical dispute about future, possible use cases," says Horowitz.

Tehran has been subject to missile strikes since 28 February. Credit: Morteza Nikoubazl/NurPhoto via Getty

Since dropping Anthropic, the government has signed a deal with OpenAI, another AI company based in San Francisco. The company says the contract outlines that its technology will not be used for surveillance or to guide fully autonomous weapons, tasks that the current technology cannot perform reliably, the company added. As of 5 March, Anthropic chief executive Dario Amodei is reportedly back in talks with the department.

doi: https://doi.org/10.1038/d41586-026-00710-w