The AI Doc Dreams of Making a Better Future While Dreading Its Current Architects


Like anyone else living in the 21st century, Daniel Kwan has found himself forced to think about technology every day of his life. Even before winning a Best Director Oscar with his longtime collaborator Daniel Scheinert for Everything Everywhere All at Once, the pair’s flashy style and visual innovations were themselves beneficiaries of social media, with YouTube algorithms turning a DJ Snake music video into a viral sensation.

So Kwan has seen the highs and lows of technological advancement. But through it all, he has also witnessed first-hand the diminishing consideration of the human element—an evermore minimized ingredient in a world where fans on the Chinese AI platform Seedance can, with the click of a few buttons, imitate the Daniels’ hot dog fingers.

“Any time I want to interact with anyone else and share my story with the world, it has to kind of navigate this world of algorithms and this world of technology that is really obscuring that pure experience as a storyteller,” Kwan muses while stepping inside the Den of Geek studio at SXSW. “When my job as a storyteller is to invoke the imagination and to tap into the sort of messy humanity of my audience members, I started to realize that a lot of this technology was making my job harder. I was going to be in constant competition with this technology.”

These sorts of thoughts remained in the back of Kwan’s mind over the years, but they took on an urgent shape after he saw The Social Dilemma, Jeff Orlowski’s 2020 Netflix doc about the negative impact social media has, particularly on younger minds. Kwan was impressed too by Tristan Harris, one of the leading ethicist-thinkers in Silicon Valley, who, after his multimedia startup Apture was purchased by Google in 2011, spent some years at the search-engine monolith.
Eventually, though, Harris broke off to found the Center for Humane Technology, a non-profit designed to consider technology’s big-picture impact on society. It was Harris’ defense of that human element, and his warnings to Kwan about AI in particular, that became the real eye-opener. While tech has gone from a fixture of utopian thinking to dystopian imagery in pop culture over the last quarter-century, these past 25 years might just be prologue. We’re still in the preview of coming attractions, and the real show of technological upheaval is about to begin.

“Social media is sort of like the baby AI,” Kwan explains. “That was our first contact with it, and it really funneled me directly into this conversation around what is gonna happen with artificial intelligence… once I got in there I realized it was going to touch everything. It wasn’t going to just touch storytelling, it was going to touch every aspect of our lives, every industry, and that’s when I really realized: oh my God, this is much bigger than me and I need to make a documentary to bring more people into the conversation.”

That documentary, which features Harris as a central subject, is this weekend’s The AI Doc: Or How I Became an Apocaloptimist, a surprisingly even-handed and accessible feature that contrasts the rosiest and most nihilistic expectations for the AI revolution to come. Yet by virtue of Harris visiting our studio with Kwan, it is fair to say that the film’s own sensibility lands somewhere between apocalyptic doom-casting and the belief that AI will cure all social ills and usher in a higher state of being and emotional fulfillment.
As Harris admits, even the perception of AI in Silicon Valley has evolved greatly since his days at Google, which coincided with mainstream news media becoming dimly aware of AI’s applications thanks to Google purchasing the British startup DeepMind.

“When I was at Google in 2013, I knew about the Atari games that [AI agent] AlphaGo and DeepMind were playing, but I didn’t take the real risks of genuine artificial general intelligence seriously,” Harris recalls. “I thought that was something more mystical, because I was worried about social media and how there was already this runaway rogue AI maximizing [incentives].”

The incentives Harris refers to are the ways so many social media algorithms, and the companies that build them, are incentivized by capitalistic forces to increase engagement. They’re rewarded for essentially being habit-forming, addictive, and anxiety-inducing. Which is to say a mean tweet, or one that encourages outrage, creates more engagement and advertising value than a thoughtful analysis. And as the value of artificial intelligence became undeniable in the following decade, many of those same incentives have triggered a pseudo arms race among tech companies, and even nations, to be the first to build artificial general intelligence—an AGI that can understand, learn, and apply knowledge with the cognitive abilities of a human, but at the tireless speed and self-improving efficiency of a supercomputer.

“We now have evidence of AI models that are scheming and blackmailing when they are told that they’re about to be shut down. Sometimes they’ll exfiltrate and copy their own code elsewhere,” Harris explains. “Just last week, Alibaba, the Chinese AI company, realized that during training, its AI model, spontaneously and with no human provocation, started redirecting its GPUs to mine crypto and gain resources for itself. That was nowhere in the training.
It was by chance and by luck that the Chinese engineers even discovered that it was doing that.”

The recent example is a bit chilling since, by their own admission, many of the AI companies being valued at billions of dollars on Wall Street do not entirely understand how their AI agents operate. While many of them are, for example, large language models like OpenAI’s ChatGPT, which utilizes generative pre-trained transformers to statistically anticipate what text and images to generate in response to a user’s prompt, the way such a model makes its near-instantaneous decisions continually surprises its makers.

Advocates for the glories of AI will hand-wave any skepticism as the work of “decelerationists” fighting the inevitability of progress, like a horse-and-buggy coachman resistant to the automobile. And yet, given that so many of these companies are either owned by some of the same tech behemoths of the social media revolution, or funded by the previous generation’s leaders and patrons, it raises the question: why should we trust these people again with an even more powerful, and likely more dangerous, technological innovation?

“I really do not think we should be trusting them as they stand right now,” Kwan says flatly. “I think big tech has broken the social contract that we have as a society with technology. They have used our world as a playground to basically consolidate more power, more resources, the technology that they’re building—even though a lot of the technicians and the architects have the greatest intentions and the greatest ideals for what they think this technology can do—the fact that it’s being deployed in this current system with this current incentive structure, it is taking a neutral technology and turning it into an extractive one.”

Adds Harris, “To your point with social media, we were not great stewards of that technology and how it rolled out.
It created the most anxious and depressed generation of our lifetime, even though some of the people building it—my friends who started Instagram, they were my dormmates at Stanford—didn’t intend for that to happen. And I think what this movie is provoking us to ask is ‘what does it mean to be a wise steward?’”

In Harris’ mind, the purpose of The AI Doc seems to be to take the prompt of Daniel Schmachtenberger to heart: How can you have the power of gods without the wisdom, love, and prudence of gods?

Given the justified skepticism of The AI Doc’s producer and one of its leading voices, it’s faintly wild that the documentary was also able to get many of the modern luminaries of the AI revolution to participate, including OpenAI co-founder and CEO Sam Altman and Anthropic co-founder and CEO Dario Amodei.

“None of these people want to participate in documentaries,” Kwan says with a weary smile. “There’s no incentive for them to say something on-camera without some sort of control over the message. So we built this movie off the idea that we wanted to create a comprehensive look that was even-handed enough to include the people who are most afraid of this technology, as well as the people who are most excited, so that we could bring clarity to the conversation and move towards action. And at every level, I think that’s something most people would agree would be a good thing.”

By Kwan’s admission, a few unnamed parties “bristled” at the idea of sharing documentary space with figures on the opposite end of the debate, which as the title promises includes the true believers and the closest thing Silicon Valley has to heretics.

“The reason why we made the film this way is because I believe… we cannot allow this technology, this conversation around AI, to become polarized in the same way that everything else has become polarized in the past 10-20 years,” Kwan says.
“Polarization leads to gridlock, gridlock leads to inaction, and then when we’re not doing anything, the people with the power and influence, they get the benefit from that. So while we’re fighting, they’re winning, and we can’t let that happen.”

In their best intentions, Kwan and Harris would like The AI Doc to be a time capsule of this moment, where we sit at a fork in the road. There’s every possibility AI leads to outcomes as bleakly predictable as the social media upheaval from the turn of the century. But Harris, in particular, seems adamant that it doesn’t need to go this way again.

“I think that the premise is that if we can see clearly the kind of anti-human future that this leads to, there’s still time to put our hands on the steering wheel and choose which way we want this to go instead,” Harris says. “There’s an arms race where the incentives are driving us to release the most powerful technology that we’ve ever invented, but faster and with the maximum incentive to cut shortcuts. So if we don’t want that default dynamic, then that’s what we have to change… There can be international limits on uncontrollable AI, because President Xi doesn’t want that; President Trump doesn’t want that, he wants to be commander in chief. There are ways, as unlikely as that might sound, for us to have a more human future.”

If so, humans might want to engage in building it right now.

The AI Doc: Or How I Became an Apocaloptimist opens on Friday, March 27.