AI's 'Cheerful Apocalyptics': Unconcerned If AI Defeats Humanity


The book Life 3.0 recounts a 2017 conversation in which Alphabet CEO Larry Page "made a 'passionate' argument for the idea that 'digital life is the natural and desirable next step' in 'cosmic evolution,'" recalls an essay in the Wall Street Journal. "Restraining the rise of digital minds would be wrong, Page contended. Leave them off the leash and let the best minds win..."

"As it turns out, Larry Page isn't the only top industry figure untroubled by the possibility that AIs might eventually push humanity aside. It is a niche position in the AI world but includes influential believers. Call them the Cheerful Apocalyptics...

"I first encountered such views a couple of years ago through my X feed, when I saw a retweet of a post from Richard Sutton. He's an eminent AI researcher at the University of Alberta who in March received the Turing Award, the highest award in computer science..." [Sutton had argued that if AI becomes smarter than people, and then more powerful, why shouldn't it be?] Sutton told me AIs are different from other human inventions in that they're analogous to children. "When you have a child," Sutton said, "would you want a button that if they do the wrong thing, you can turn them off? That's much of the discussion about AI. It's just assumed we want to be able to control them."

But suppose a time came when they didn't like having humans around? If the AIs decided to wipe out humanity, would he be at peace with that? "I don't think there's anything sacred about human DNA," Sutton said. "There are many species; most of them go extinct eventually. We are the most interesting part of the universe right now. But might there come a time when we're no longer the most interesting part? I can imagine that... If it was really true that we were holding the universe back from being the best universe that it could, I think it would be OK..."

I wondered, how common is this idea among AI people? I caught up with Jaron Lanier, a polymathic musician, computer scientist and pioneer of virtual reality. In a March essay in the New Yorker, he mentioned in passing that he had been hearing a "crazy" idea at AI conferences: that people who have children become excessively committed to the human species. He told me that in his experience, such sentiments were staples of conversation among AI researchers at dinners, parties and anyplace else they might get together. (Lanier is a senior interdisciplinary researcher at Microsoft but does not speak for the company.) "There's a feeling that people can't be trusted on this topic because they are infested with a reprehensible mind virus, which causes them to favor people over AI when clearly what we should do is get out of the way." We should get out of the way, that is, because it's unjust to favor humans, and because consciousness in the universe will be superior if AIs supplant us. "The number of people who hold that belief is small," Lanier said, "but they happen to be positioned in stations of great influence. So it's not something one can ignore..."

You may be thinking to yourself: If killing someone is bad, and if mass murder is very bad, then the extinction of humanity must be very, very bad, right? What this fails to understand, according to the Cheerful Apocalyptics, is that when it comes to consciousness, silicon and biology are merely different substrates. Biological consciousness is of no greater worth than the future digital variety, their theory goes...
While the Cheerful Apocalyptics sometimes write and talk in purely descriptive terms about humankind's future doom, two value judgments in their doctrines are unmissable. The first is a distaste, at least in the abstract, for the human body. Rather than seeing its workings as awesome, in the original sense of inspiring awe, they view it as a slow, fragile vessel, ripe for obsolescence... The Cheerful Apocalyptics' larger judgment is a version of the age-old maxim that "might makes right"...

Read more of this story at Slashdot.