It would take about 30 minutes for a nuclear-armed intercontinental ballistic missile (ICBM) to travel from Russia to the United States. If launched from a submarine, it could arrive even faster. Once the launch is detected and confirmed as an attack, the president is briefed. At that point, the commander-in-chief might have two or three minutes at most to decide whether to launch hundreds of America’s own ICBMs in retaliation or risk losing the ability to retaliate at all. This is an absurdly short window for any consequential decision, much less what would potentially be the most consequential one in human history. While countless experts have devoted countless hours over the years to thinking about how a nuclear war would be fought, if one ever happens, the key decisions are likely to be made by unprepared leaders with little time for consultation or second thought.

Key takeaways

- In recent years, military leaders have been increasingly interested in integrating artificial intelligence into the US nuclear command-and-control system, given AI’s ability to rapidly process massive amounts of data and detect patterns.
- Rogue AIs taking over nuclear weapons are a staple of movie plots, from WarGames and The Terminator to the most recent Mission: Impossible movie, which likely influences how the public views this issue.
- Despite their interest in AI, officials have been adamant that a computer system will never be given control of the decision to actually launch a nuclear weapon; last year, the presidents of the US and China issued a joint statement to that effect.
- But some scholars and former military officers say that a rogue AI launching nukes is not the real concern. Their worry is that as humans come to rely more and more on AI for their decision-making, AI will provide unreliable data — and nudge human decisions in catastrophic directions.

And so it shouldn’t be a surprise that the people in charge of America’s nuclear enterprise are interested in finding ways to automate parts of the process — including with artificial intelligence. The idea is to potentially give the US an edge — or at least buy a little time.

But for those who are concerned about either AI or nuclear weapons as a potential existential risk to the future of humanity, the idea of combining those two risks into one is a nightmare scenario. There’s wide consensus on the view that, as UN Secretary General António Guterres put it in September, “until nuclear weapons are eliminated, any decision on their use must rest with humans — not machines.”

By all indications, though, no one is actually looking to build an AI-operated doomsday machine. US Strategic Command (STRATCOM), the military arm responsible for nuclear deterrence, is not exactly forthcoming about where AI might be in the current command-and-control system. (STRATCOM referred Vox’s request for comment to the Department of Defense, which did not respond.) But it’s been very clear about where it is not. “In all cases, the United States will maintain a human ‘in the loop’ for all actions critical to informing and executing decisions by the President to initiate and terminate nuclear weapon employment,” Gen. Anthony Cotton, the current STRATCOM commander, told Congress this year.
At a landmark summit last year, Chinese President Xi Jinping and then-US President Joe Biden “affirmed the need to maintain human control over the decision to use nuclear weapons.” There are no indications that President Donald Trump’s administration has reversed this position.

But the unanimity behind the idea that humans should remain in charge of the nuclear arsenal obscures a subtler danger. Many experts believe that even if humans are still the ones making the final decision to use nuclear weapons, their increasing reliance on AI to make those decisions will make it more, not less, likely that those weapons will actually be used, particularly as humans start to place more and more trust in AI as a decision-making aid. A rogue AI killing us all is, for now at least, a far-fetched fear; a human consulting an AI on pressing the button is the scenario that should keep us up at night.

“I’ve got good news for you: AI is not going to kill you with a nuclear weapon anytime soon,” said Peter W. Singer, a strategist at the New America think tank and author of several books on military automation. “I’ve got bad news for you: it may make it more likely that humans will kill you with a nuclear weapon.”

Why would you combine AI and nukes?

To understand exactly what threat AI’s involvement in our nuclear system poses, it is important to first grasp how it’s being used now.

It may seem surprising given its extreme importance, but many aspects of America’s nuclear command system are still decidedly low-tech, according to people who’ve worked in it, in part due to a desire to keep vital systems “air-gapped,” meaning physically separated from larger networks, to prevent cyberattacks or espionage. Until 2019, the communications system that the president would use to order a nuclear strike still relied on floppy disks. (Not even the small hard plastic disks from the 1990s, but the bendy 8-inch ones from the 1980s.) The US is currently in the midst of a multidecade, nearly trillion-dollar nuclear modernization process, including spending about $79 billion to bring the nuclear command, control, and communications systems out of the Atari era. (The floppy disks were replaced with a “highly secure solid-state digital storage solution.”) Cotton has identified AI as being “central” to this modernization process.

In testimony earlier this year, he told Congress that STRATCOM is looking for ways to “use AI/ML [machine learning] to enable and accelerate human decision-making.” He added that his command was looking to hire more data scientists with the aim of “adopting AI/ML into the nuclear systems architecture.”

Some roles for AI are fairly uncontroversial, such as “predictive maintenance,” which uses past data to order new replacement parts before the old ones fail. At the other extreme would be a theoretical system that could give AI the authority to launch nuclear weapons in response to an attack if the president can’t be reached. While there are advocates for a system like this, the US has not taken any steps toward building one, as far as we know. This is the kind of scenario that likely comes to mind for most people when they think of combining nuclear weapons and AI, due in part to years of films in which rogue computers try to destroy the world.
In another public appearance, Cotton referred to the 1983 film WarGames, in which a computer system called WOPR goes rogue and nearly starts a nuclear war: “We do not have a WOPR in STRATCOM headquarters. Nor would we ever have a WOPR in STRATCOM headquarters.”

Fictional examples like WOPR or The Terminator’s Skynet have undoubtedly colored the public’s views on mixing AI and nukes. And those who believe that a superintelligent AI system could attempt on its own to destroy humanity understandably want to keep these systems far away from the most efficient methods humans have ever created to do just that.

Most of the ways AI is likely to be used in nuclear warfare fall somewhere between smart maintenance and full-on Skynet. “People caricature the terms of this debate as whether it’s a good idea to give ChatGPT the launch codes. But that isn’t it,” said Herb Lin, an expert on cyber policy at Stanford University.

One of the most likely applications for AI in nuclear command-and-control would be “strategic warning” — synthesizing the massive amount of data collected by satellites, radar, and other sensor systems to detect potential threats as soon as possible. This means keeping track of the enemy’s launchers and nuclear assets to both identify attacks when they happen and improve options for retaliation.

“Does it help us find and identify potential targets in seconds that human analysts may not find for days, if at all? If it does those kinds of things with high confidence, I’m all for it,” retired Gen. Robert Kehler, who commanded STRATCOM from 2011 to 2013, told Vox.

AI could also be employed to create so-called “decision-support” systems, which, as a recent report from the Institute for Security and Technology put it, don’t make the decision to launch on their own but “process information, suggest options, and implement decisions at machine speeds” to help humans make those decisions.

Retired Gen. John Hyten, who commanded STRATCOM from 2016 to 2019, described to Vox how this might work. “On the nuclear planning side, there’s two pieces: targets and weapons,” he said. Planners have to determine what weapons would be adequate to threaten a given target. “The traditional way we did data processing for that takes so many people and so much time and money, and was unbelievably difficult to do. But it’s one of the easiest AI problems you can define, because it is so finite.”

Both Hyten and Kehler were adamant that they do not favor giving AI the ability to make final decisions regarding the use of nuclear weapons, or even having it provide what Kehler called the “last-ditch information” given to those making the decisions.

But under the unbelievable pressure of a live nuclear warfare situation, would we actually know what role AI is playing?

Why we should worry about AI in the nuclear loop

It’s become a cliché in nuclear circles to say that it’s critical to keep a “human in the loop” when it comes to the decision to use nuclear weapons. When people use the phrase, the human they have in mind is probably someone like Jack Shanahan. A retired Air Force lieutenant general, Shanahan has actually dropped a B-61 nuclear bomb from an F-15. (An unarmed one in a training exercise, thankfully.) He later commanded the E-4B National Airborne Operations Center, known as the “doomsday plane” — the command center for whatever was left of the American executive branch in the event of a nuclear attack.

In other words, he’s gotten about as close as anyone to the still-only-theoretical experience of fighting a nuclear war.
Pilots flying nuclear bombing training missions, he said, were given the option of bringing an eyepatch. In a real detonation, the explosion could be blinding for the pilots, and wearing the eyepatch would keep at least one eye working for the flight home. But in the event of a thermonuclear war, no one really expected a flight home. “It was a suicidal mission, and people understood that,” Shanahan told Vox.

In the final assignment of his 36-year Air Force career, Shanahan was the inaugural head of the Pentagon’s Joint Artificial Intelligence Center. Having seen both nuclear strategy and the Pentagon’s push for automation from the inside, Shanahan is concerned that AI will find its way into more and more aspects of the nuclear command-and-control system, without anyone really intending it to or fully understanding how it’s impacting the overall system. “It’s the insidious nature of it,” he said. “As more and more of this gets added to different parts of the system, in isolation, they’re all fine, but when put together into sort of a whole, is a different issue.”

In fact, it has been malfunctioning technology, more than hawkish leaders, that has brought us alarmingly close to the brink of nuclear annihilation in the past.

In 1979, National Security Adviser Zbigniew Brzezinski was woken up by a call informing him that 220 missiles had been fired from Soviet submarines off the coast of Oregon. Just before Brzezinski called to wake up President Jimmy Carter, his aide called back: It had been a false alarm, triggered by a defective computer chip in a communications system. (As he rushed to get the president on the phone, Brzezinski decided not to wake up his wife, thinking that she would be better off dying in her sleep.)

Four years later, Soviet Lt. Col. Stanislav Petrov elected not to immediately inform his superiors of a missile launch detected by the Soviet early warning system known as Oko. It turned out the computer system had misinterpreted sunlight reflected off clouds as a missile launch. Given that Soviet military doctrine called for full-scale nuclear retaliation, his decision may have saved billions of lives.

Just a few weeks after that, the Soviets put their nuclear forces on high alert in response to a US training exercise in Europe called Able Archer 83, which Soviet commanders believed may actually have been preparations for a real attack. Their paranoia was based in part on a massive KGB intelligence operation that used computer analysis to detect patterns in reports from overseas spies.

Today’s AI reasoning models are far more advanced, but still prone to error. The controversial AI targeting system known as “Lavender,” which the Israeli military used to target suspected Hamas militants during the war in Gaza, reportedly had an error rate of up to 10 percent.

AI models could also be vulnerable to cyberattacks or subtler forms of manipulation. Russian propaganda networks have reportedly seeded disinformation aimed at distorting the responses of Western consumer AI chatbots. A more advanced effort could do the same with AI systems meant to detect the movement of missiles or preparations for the use of a tactical nuclear weapon.
And even if all the information collected by the system is valid, there are reasons to be concerned about AI systems recommending courses of action. AI models are famously only as useful as the data that’s fed into them, and their performance improves when there’s more of that data to process. But when it comes to how to fight a nuclear war, “there are no real-world examples of this with the exception of two in 1945,” Shanahan points out. “Beyond that, it’s all theory. It’s doctrine, board games, experiments, and simulations. It’s not real data. The model might spit out something that sounds unbelievably credible, but is it justified?”

Stanford’s Lin points out that studies have shown humans often give undue deference to computer-generated conclusions, a phenomenon known as “automation bias.” The bias might be especially difficult to resist in a life-or-death scenario with little time to make critical decisions — and one where the temptation to outsource an unthinkable decision to a thinking machine could be overwhelming.

Would-be Stanislav Petrovs of the AI era would also have to contend with the fact that even the designers of advanced AI models often don’t understand why they generate the responses they do. “It’s still a black box,” said Alice Saltini, a leading scholar on AI and nuclear weapons, referring to the internal operations of advanced reasoning models. “What we do know is that it’s highly vulnerable to cyberattacks and that we can’t quite align it yet with human goals and values.”

And while it’s still theoretical, if the worst predictions of AI skeptics turn out to be true, there’s also the possibility that a highly intelligent system could deliberately mislead the humans relying on it to make decisions.

The notion of keeping a human “in control over the decision to use nuclear weapons,” as Biden and Xi vowed last year, might sound comforting. But if a human is making a decision based on data and recommendations put forward by AI, and has no time to probe the process the AI is using, it raises the question of what control even means. Would the “human in the loop” still actually make the decision, or would they merely rubber-stamp whatever the AI says?

The need for speed

For Adam Lowther, arguments like these miss the point. A nuclear strategist, past adviser to STRATCOM, and co-founder of the National Institute for Deterrence Studies, Lowther caused a stir among nuke wonks in 2019 with an article arguing that America should build its own version of Russia’s “dead hand” system. The dead hand, officially called Perimeter, was a system developed by the Soviet Union in the 1980s that would give human operators orders to launch the country’s remaining nuclear arsenal if sensors detected a nuclear attack and Soviet leaders were no longer able to give the orders themselves. The idea was to preserve deterrence even in the event of a first strike that wiped out the command chain. Ideally, that would discourage any adversary from attempting such a strike. The system is believed to still be in operation, and former Russian President Dmitry Medvedev referred to it in a recent threatening social media post directed at the Trump administration’s Ukraine policies.
An American Perimeter-style system, Lowther says, would not be a ChatGPT-type program generating options on the fly, but an automated system carrying out commands that the president had already decided on in advance based on various scenarios. In the event the president was still alive and in a position to make decisions during a nuclear war, they would likely be choosing from a set of attack options provided by the nuclear “football” that travels with the president at all times, laid out on laminated sheets said to resemble a Denny’s menu. (This “menu” is shown in the recent Netflix film House of Dynamite.)

Lowther believes AI could help the president make a decision in that moment, based on courses of action that have already been decided. “Let’s suppose a crisis happens,” Lowther told Vox. “The system can then tell the president, ‘Mr. President, you said that if option number 17 happens, here’s what you want to do.’ And then the president can say, ‘Oh, that’s right, I did say that’s what I thought I wanted to do.’”

The point is not that AI is never wrong. It’s that it would likely be less wrong than a human would be in the most high-pressure situation imaginable. “My premise is: Is AI 1 percent better than people at making decisions under stress?” he said. “If the answer is that it’s 1 percent better, then that’s a better system.”

For Lowther, the 80-year history of nuclear deterrence, including the near-misses, is proof that the system can effectively prevent catastrophe, even when mistakes occur. “If your argument is, ‘I don’t trust humans to design good AI,’ then my question is, ‘Why do you trust them to make decisions about nuclear weapons?’” he said.

The nuclear AI age may already be upon us

The encroachment of AI into nuclear command-and-control systems is likely to be a defining feature of the so-called third nuclear age, and may already be underway, even as national leaders and military commanders remain adamant that they have no plans to hand authority to use the weapons over to the machines. But Shanahan is concerned that the allure of automating more and more of the system may prove hard to resist. “It’s just a matter of time until you’re going to have well-meaning senior people in the Department of Defense saying, ‘Well, I’ve got to have this stuff,’” he said. “They’re going to be snowed by some big pitch” from defense contractors.

Another incentive to automate more of the nuclear system may come if the US perceives its adversaries as gaining an advantage from doing so, a dynamic that has driven nuclear arms build-ups since the beginning of the Cold War. China has made its own aggressive push to integrate AI into its military capabilities. A recent Chinese defense industry study touted a potential new system that could use AI to integrate data from underwater sensors to track nuclear submarines, reducing their chance of escape to 5 percent. The report warrants skepticism — “making the oceans transparent” is a long-anticipated capability that is still probably a long way off — but experts believe it’s safe to assume Chinese military planners are looking for opportunities to use AI to improve their nuclear capabilities, as they work to build up their arsenal to catch up with the United States and Russia.

Though the Biden-Xi agreement of 2024 may not have actually done much to mitigate the real risks of these systems, Chinese negotiators were still reluctant to sign onto it, likely due to suspicions that it was an American ruse to undermine China’s capabilities.
It’s entirely possible that one or more of the world’s nuclear powers could increase automation in parts of their nuclear command-and-control systems simply to keep up with the competition. When dealing with a system as complex as command-and-control, and scenarios where speed is as disturbingly necessary as it would be in an actual nuclear war, the case for more and more automation may prove irresistible. And given the unstable and increasingly violent state of world politics, it’s tempting to ask if we’re sure that the world’s current human leaders would make better decisions than the machines if the nightmare scenario ever came to pass.

But Shanahan, reflecting on his own time within America’s nuclear enterprise, still believes decisions with such grave consequences for so many humans should be left with humans. “For me, it was always a human-driven process, for better and worse,” he said. “Humans have their own flaws, but in this world, I’m still more comfortable with humans making these decisions than a machine that may not act in ways that humans ever thought they are capable of acting.”

Ultimately, it’s fear of the consequences of nuclear escalation, more than anything else, that may have kept us all alive for the past 80 years. For all AI’s ability to think fast and synthesize more data than a human brain ever could, we probably want to keep the world’s most powerful weapons in the hands of intelligences that can fear as well as think.

This story was produced in partnership with Outrider Foundation and Journalism Funding Partners.