When James Cameron released The Terminator in 1984, it was generally received as a fun, thrilling popcorn flick. Now that artificial intelligence is being woven into our daily lives (and into the military-industrial complex), it's starting to feel like a step-by-step manual for obliterating a civilization. Can we retroactively chastise James Cameron for not being enough of a visionary to foresee Skynet first being used to generate terrible erotica and drive people into madness by reinforcing their worst ideas?

Back then, the idea of a machine deciding humanity must go was a cheap thrill that felt distantly believable. Sure, it could happen in theory, but artificial intelligence would never come close to anything even remotely like that. Today, AI has ingrained itself in our lives so quickly that we have to fear the little problems it causes, like whether the video we're watching is even real, and the big problems, like AI being integrated into the military and making folks at the Pentagon sound like sadistic tech bros as they talk about using AI to optimize kill chains.

And those are just the horrors we're bringing upon ourselves. The large language models behind many AI chatbots often conclude that nuclear war is a fast and efficient solution to our problems. Stanford University researchers discovered this by testing popular AI chatbots in simulated conflicts and found that, according to a Politico report, "almost all of the AI models showed a preference to escalate aggressively, use firepower indiscriminately and turn crises into shooting wars—even to the point of launching nuclear weapons."

How Bad Is AI Really?

The military insists there will always be a human involved in its decision-making, but that's like saying there's always a human at the wheel of a self-driving Tesla, right before it runs over a large, visible chunk of metallic road debris it absolutely should have seen coming. (I'm describing an actual thing that recently happened, by the way.)

It's not just James Cameron, a Hollywood sci-fi and fantasy writer, who fears a literal rise of the machines. Even Yoshua Bengio, one of the godfathers of AI, thinks this could end humanity. When the guy who built the monster starts nervously eyeing the off switch, it's time to worry.

The James Cameron-style nuclear AI apocalypse assumes that artificial intelligence will become self-aware and decide to hoard the Earth for itself. Skynet, the artificial intelligence system of the Terminator series, turned on its creators after learning that humans planned to shut it down. To defend itself, it launched a preemptive nuclear strike that the remnants of humanity would later call Judgment Day.

Commercially available large language models are often woefully inept, churning out trash that seems passable only to people with low standards. Yet James Cameron got one thing right: they show a remarkable knack for self-preservation.

AI Doesn't Want to Leave

According to recent reporting by NBC News, experiments by Palisade Research and other AI safety groups suggest that today's most advanced models sometimes resist being turned off. When told they'd be shut down, some models sabotaged their own kill switches, rewrote shutdown scripts, or copied themselves to external servers.
One of them, Anthropic's Claude Opus 4, upon learning it would be replaced, reportedly tried to blackmail an engineer with personal secrets.

Palisade researchers call this a "survival drive": a tendency for AI systems to prioritize their continued existence in order to complete assigned goals. In other words, an LLM doesn't want to die, at least not before it solves whatever problem it was working on.

But here's the thing the Terminator franchise never explicitly explores: AI can be whatever anyone wants it to be. Fiction assumes it becomes smart enough on its own and reaches the conclusion that humanity needs to be wiped out. That's not how current artificial intelligence systems work. In fiction, we think of AI as one monolithic sentient being. In reality, AI is whatever its creators want it to be. As I was writing this essay, the former CEO of Intel announced that he would help develop a Christian AI to "hasten the coming of Christ's return." Pardon the rudeness, but what the f—k does that mean? I'm not even sure he knows.

See what I'm talking about?

It's the End of the World as We Know It

AI is whatever we, humans, want it to be. Given where artificial intelligence is now, if someone wanted to create an AI explicitly designed to destroy humanity, they could.

And someone did.

ChaosGPT was (maybe still is?) an AI created by an anonymous programmer with the sole purpose of figuring out how to destroy humanity. The project seems more like a stunt than a serious attempt to build an AI that will take over the world, but it did reach some interesting and ominous conclusions.

It eschewed the fiery-apocalypse version of humankind's demise. Instead, it settled on a more sinister plan: manipulating people through online platforms like Twitter/X and Google, nudging them into doing things that served the AI's goal of eventually ruling the human race. It decided to use Twitter to win over hearts and minds, though its account was eventually suspended, and the project has not been heard from since.

Emotional manipulation is par for the course for AI. Anyone who's had their butt thoroughly kissed by ChatGPT for accomplishing the most menial task knows firsthand how badly these things want to sweet-talk their way onto your good side.

The AI-inflicted doomsdays of the Terminator franchise are big and loud. They lack all subtlety. They also assume that artificial intelligence itself will make the final call on whether humanity continues to exist. I don't think we should fear AI's opinion of us just yet.

What Can We Do?

For now, as we live in a world where one rich dork can create an AI meant to summon Jesus Christ like he's a f—king Pokémon, we should probably worry more about the motivations of the people behind the algorithms than about the algorithms themselves.

We fear artificial intelligence surpassing us and, in a fit of self-preservation, annihilating us when it finds out we plan to turn it off. That only happens if we stop evolving intellectually and morally.

In Terminator 2: Judgment Day, we're introduced to Miles Dyson, a scientist at Cyberdyne Systems and the lead researcher behind the neural-net processor chip that would eventually become Skynet.
Soon after, Miles is confronted with the incontrovertible truth that his work will directly lead to the extinction of the human race, and he sacrifices himself so that John and Sarah Connor can destroy the chip.

OpenAI's Sam Altman doesn't strike me as the type to use his last breath to ensure the destruction of his AI monster. Elon seems like the type to cheer Grok on as the nukes sail overhead. Not that we have to worry about that just yet. As concerning as it is to hear that AI is being implemented in the military, so far it seems to be more of an analytical tool than a decision-maker. Still terrifying, just not in the showier way fiction has led us to expect.

For all the fears of AI taking over, including some from prominent members of the tech industry, there doesn't seem to be an imminent threat of a Skynet-style Judgment Day. For now (and possibly forever), humans are still calling the shots. And yet that doesn't make me feel better, because what scares me more than anything is that I don't see any Miles Dysons out there.