The AI Industry Wants to Automate Itself

Late last month, a large crowd gathered in downtown San Francisco to demand that the AI industry stop developing more powerful bots. Holding signs and banners reading Stop the AI Race and Don’t Build Skynet, the protesters marched through the city and gave speeches outside the offices of Anthropic, OpenAI, and xAI. The crowd demanded that these companies halt efforts to create superintelligent machines—and, in particular, AI models that can develop future AI models. Such a technology, attendees said, could extinguish all human life.

At AI protests and happy hours, inside start-ups and major companies, the tech world is in a frenzy over the same thing: computers that make themselves smarter. Over the past year, the top AI companies have taken to loudly bragging about internal efforts to automate their own research. OpenAI recently released a new model it described as “instrumental in creating itself.” Within the next six months, the company aims to debut what it has described as an “intern-level AI research assistant.” Meanwhile, Anthropic says that as much as 90 percent of the company’s code is already written by Claude.

“We are starting to see AI progress feed back on itself,” Nick Bostrom, an influential Swedish philosopher who studies AI risk, told us. Within Silicon Valley, many insiders believe that we are teetering on the precipice of a world in which AI can rapidly improve its own capabilities. Instead of waiting for months between new machine-learning breakthroughs, we might wait weeks. Imagine AI advancing faster and faster.

The idea of self-improving bots is nothing new. When the statistician I. J. Good first introduced the concept of recursive self-improvement in the 1960s, he wrote that machines capable of training their own, even more capable successors would be “the last invention” society ever needed to make. But just a few years ago, any notion of actually building such AI models was on the back burner. When ChatGPT couldn’t reliably add and subtract, let alone search the web, the idea that AI programs would soon be able to do world-class machine-learning research seemed laughable. Even as tech companies made claims about the imminent arrival of “artificial general intelligence,” the capabilities needed for a bot to accelerate or even direct AI research seemed to exceed those of AGI.

[Read: Do you feel the AGI yet?]

Now, as AI models have become significantly better at coding, Silicon Valley has become hooked on the idea of self-improving machines. AI research involves a lot of grunt work—curating large data sets, running repeated experiments—that can be made more efficient with the help of coding bots. Dario Amodei, Anthropic’s CEO, has estimated that coding tools speed up his company’s overall workflows by 15 to 20 percent.

But the information that top AI firms share about how, and to what extent, they have automated internal research is patchy at best. When Anthropic says that Claude writes almost all of its code, we don’t know how much human supervision was required. (An Anthropic spokesperson declined a request for an interview but pointed us to a recent podcast in which Jack Clark, the company’s head of policy, said one of his biggest priorities this year is to better understand “the extent to which we are automating aspects of AI development.”) There are also few details about OpenAI’s forthcoming AI “intern.” A company spokesperson described it to us as a system that could contribute to research workflows by, for instance, conducting literature reviews or interpreting the results of experiments. (The Atlantic has a corporate partnership with OpenAI.) One concrete example of how AI is being used to automate research comes from Google DeepMind: Last year, the company developed an AI coding agent called AlphaEvolve, which, according to research published by the firm, was able to make Google’s global data-center fleet 0.7 percent more computationally efficient on average and cut the overall training time of Gemini by 1 percent.

[Read: AI agents are taking America by storm]

All of these current approaches to self-improving AI are piecemeal rather than recursive. AI tools can write code, find small optimizations, and generally make discrete parts of the AI research process faster. It’s impressive that machines are able to at least incrementally improve their own abilities, but right now humans still play an essential role. AI research has many components: curating training data, proposing new hypotheses, setting up experiments to test them, and deciding how to allocate scarce computing resources. Eventually, the thinking goes, recursively self-improving AI models will make the leap from rote programming to having real research “taste”—as AI insiders call the mix of human creativity and judgment exhibited by top software engineers. Instead of humans coming up with ideas for new experiments, the bots will do this themselves.

AI boosters and doomers alike believe that we’re not far from that future. Sam Altman says that by 2028, OpenAI plans to have developed a fully “automated AI researcher.” By then, “we are pretty confident we will have systems that can make more significant discoveries,” the company said in a recent blog post. Based on the speed of recent advances in AI, Eli Lifland, a researcher at the AI Futures Project, has forecast that AI research and development could be fully automated by 2032. After all, a few years ago, top models could successfully do only things that would take a human developer seconds; now they autonomously complete tasks that would take humans hours. “I don’t expect a reason for it to slow down,” Neev Parikh, a researcher at METR, a nonprofit that studies AI coding capabilities, told us.

There are plenty of reasons to be skeptical that AI research will be fully automated over such a short time horizon. Coding bots are designed to execute directions, but developing an AI with research taste might require some kind of transformative breakthrough. Not to mention the various constraints on AI development—including the availability of funding, chips, and energy for data centers—that threaten to stall progress at any time. For now, “the overall pipeline to realize this self-improvement loop is still yet to be developed,” Pushmeet Kohli, DeepMind’s vice president of science and strategic initiatives, told us. A bot can optimize things, but it doesn’t “have anything to optimize for,” Kohli said. “That’s where the human comes in.”

[Read: Inside the dirty, dystopian world of AI data centers]

Ultimately, even if the most fantastical dreams of recursive self-improvement turn out to be little more than a marketing ploy, marginal improvements in automating research are likely to further accelerate the pace of AI development.
“This could change the dynamics of AI competition, alter AI geopolitics, and much more,” Dean Ball, a former Trump adviser on AI, recently wrote. Governments and civil society are already lagging. American institutions are in many ways still adapting to the internet—the IRS still processes tax returns using COBOL, a programming language that was released in 1960. Should AI models progress faster, public policy, including regulations on safety and security, has even less hope of keeping up.

Bostrom, the philosopher, expressed a sort of resignation about the AI future when we spoke. He used to call himself a “fretful optimist,” he said, but now he’s a “moderate fatalist.”

In a strange way, none of the predictions about recursive self-improvement need to be true for them to matter. Last year, a team of academics interviewed 25 leading researchers at DeepMind, OpenAI, Anthropic, Meta, UC Berkeley, Princeton, and Stanford. Twenty of them identified the automation of AI research as among the industry’s “most severe and urgent” risks. Now these dramatic warnings are finding a growing audience. “Human beings could actually lose control over the planet,” Senator Bernie Sanders recently warned Congress, sounding just like the San Francisco protesters. Yet again, the AI industry has found a way to ratchet up the hype behind its technology.