Amid reports that artificial intelligence (AI) tools were being used by the United States military in its ongoing war on Iran, top AI companies like OpenAI and Anthropic are looking to hire experts who can help design guardrails for situations in which their software is used in military conflicts. Anthropic, for instance, is looking to recruit a chemical weapons and high-yield explosives expert to try to prevent “catastrophic misuse” of its software. OpenAI is hiring a researcher in “biological and chemical risks”.

The moves come as AI systems are rapidly becoming embedded in modern warfare, from intelligence processing to battlefield planning.

The growing use of AI in warfare has come under intense scrutiny after reports emerged that Anthropic’s AI model Claude was used by the US military during operations against Iran, even as Washington and the company remain locked in a bitter dispute over the technology’s military applications.

The Anthropic vs US Pentagon tussle

Claude, a large language model developed by the AI startup Anthropic, has been widely deployed across US national security agencies for tasks such as intelligence analysis, operational planning and cyber operations. United States Department of Defense systems have used the technology to model battle scenarios and analyse intelligence data.

However, tensions escalated earlier this year after the Pentagon designated Anthropic a “supply chain risk”, effectively ordering federal agencies to phase out the company’s technology within six months.
The decision followed disagreements over how the military could use Claude, with Anthropic insisting on safeguards preventing the AI from being used for mass domestic surveillance or for developing fully autonomous weapons systems.

Despite the order, multiple reports suggest that Claude continued to play a role in the US military campaign in Iran. The AI system is believed to have been used for tasks such as target identification, intelligence assessment and simulating possible battlefield outcomes during airstrike planning.

The revelations have sparked controversy because the alleged use of the technology came after the Trump administration directed federal agencies to stop using Anthropic’s AI tools, underscoring the military’s reliance on advanced AI systems for modern warfare.

What does this mean for future use of AI in wars?

The dispute reflects a deeper clash between Silicon Valley’s attempts to set ethical boundaries for AI and the Pentagon’s desire for unrestricted access to cutting-edge technologies. While the US military argues that it should be able to deploy AI tools for “all lawful purposes”, Anthropic has maintained that private companies should retain some control over how their models are used.

The fallout has also spilled into the defence tech ecosystem. Companies such as Palantir Technologies, which integrate AI systems into military software platforms, have acknowledged that their tools remain linked with Claude even as the Pentagon attempts to transition away from Anthropic’s technology.

Meanwhile, Anthropic has challenged the Pentagon’s designation in court, arguing that the “supply chain risk” label is unjustified and politically motivated.
At the same time, internal Pentagon memos suggest the department may allow limited exemptions where Anthropic’s tools remain critical to national security operations.

Soumyarendra Barik is a Special Correspondent with The Indian Express, specialising in the intersection of technology, policy, and society.