Six months ago, the AI sector was looking pretty bubbly. Companies were plowing hundreds of billions of dollars, much of it borrowed, into building new data centers, but had no clear path to profitability. Experts and journalists, myself included, were comparing the AI build-out to the railroad bubble of the 1800s and the dot-com bubble of the ’90s, in which speculation led to overinvestment that eventually crashed the stock market. Even OpenAI CEO Sam Altman voiced public doubts. “Are we in a phase where investors as a whole are overexcited about AI?” he said last year. “My opinion is yes.”

Today, however, we’re in a very different world. Software developers are adopting AI tools en masse and reporting astronomical productivity benefits. The worry that the country is building too many data centers now coexists with the fear that we won’t have enough of them to satisfy the public’s growing appetite for these products. And the company previously known as OpenAI’s junior competitor has become possibly the fastest-growing business in the history of capitalism. Anthropic’s revenue is increasing faster—much faster—than Zoom’s during the pandemic, Google’s during the early 2000s, and even Standard Oil’s during the Gilded Age. If the company’s current growth rate were to continue, then by early next year it would be taking in more money than any company in the world.

The cause of this turnaround can be summarized in two words: Claude Code.

When Anthropic released an update to its flagship product in November, AI seemed to cross some invisible threshold between interesting gadget and life-changing technology. With Claude Code, a team of autonomous AI agents could take over your computer and, in minutes or hours, complete programming tasks that previously would have taken humans days or weeks. In many cases, the final product required few, if any, human changes.
Other companies have since released updates to their own coding tools, such as OpenAI’s Codex and Anysphere’s Cursor, which are considered nearly as impressive as Claude Code. “This really was a step change,” Ethan Mollick, a co-director of the Generative AI Lab at the University of Pennsylvania, told me. “For years now, we’ve been in an era of chatbots that mostly just say things. Now we’ve officially crossed into the era of agents that can actually do things.” The implications are enormous for any industry that relies heavily on software. Jordan Nanos, a member of the technical staff at the semiconductor-research firm SemiAnalysis, told me that his small team produces four times as much software as it did last year despite having the same number of employees. Tim Fist, the director of emerging-technology policy at the Institute for Progress, told me that “it feels sort of ridiculous” to be working on his computer-science Ph.D., because “Claude can basically do 90 percent of it.” Meta recently announced that it will lay off 10 percent of its workforce; a few months ago, Mark Zuckerberg told investors that, thanks to AI, “projects that used to require big teams” can “now be accomplished by a single very talented person.”

Academic research backs up these anecdotal claims. Last year, the think tank Model Evaluation & Threat Research ran an experiment in which software developers were randomly assigned to do coding tasks with or without the use of AI. To everyone’s surprise, developers completed tasks 20 percent slower when using AI, in part because they were spending so much time correcting the AI’s output. (That study factored heavily into an article I wrote in September suggesting that AI was indeed a bubble.) Recently, however, the same researchers re-ran the experiment using the latest AI coding tools. This time, the same developers completed tasks almost 20 percent faster with AI than without it.
And that’s probably an underestimate, because some power users had become so hooked on AI tools that they refused to participate in the second experiment.

Now that AI is providing clear productivity benefits, companies have few qualms about spending money on it. By one estimate, the percentage of American businesses with a paid subscription to at least one AI tool or service has risen from about a quarter at the beginning of 2025 to over half today. Researchers at Goldman Sachs who conducted interviews with 40 software companies about their AI use in mid-April found that many were “overrunning their initial budgets” for AI tools “by orders of magnitude,” with some companies’ AI spending already reaching as much as 10 percent of their total engineering labor costs. “It typically takes enterprises much much longer to adapt to new technologies than it takes consumers,” Gabriela Borges, a software analyst at Goldman Sachs, told me. “So the speed at which we’re seeing companies adapting these tools is actually quite surprising.”

This dynamic has turned the economics of AI upside down. Six months ago, data-center investments appeared to be getting ahead of demand; today, demand is rising so fast that AI companies lack the physical infrastructure to satisfy it. Anthropic has been forced to limit customers’ use of its coding tools during “peak hours,” and OpenAI has scrapped its video-generation app to free up computing power. Semiconductors are in such high demand that even Nvidia’s fourth-best AI chip, released back in 2022, costs more today than it did three years ago.

When demand for your product outpaces supply, you tend to make a lot of money. In just the past two months, Anthropic’s annual run rate—the amount the company is on track to make in the next year based on the current month’s revenue—has gone from $14 billion to $30 billion.
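That run-rate convention is just linear extrapolation from a single month of revenue. A minimal sketch of the arithmetic (the $14 billion and $30 billion figures are the ones reported above; the implied monthly numbers and growth rate are my own back-of-envelope math, not company disclosures):

```python
def annual_run_rate(monthly_revenue_billions: float) -> float:
    """Annualize one month's revenue, the convention used for AI-industry run rates."""
    return monthly_revenue_billions * 12

# Monthly revenues implied by the reported $14B -> $30B run-rate jump:
then = annual_run_rate(14 / 12)  # run rate two months ago, ~$14B
now = annual_run_rate(30 / 12)   # run rate today, ~$30B

# Average month-over-month growth implied by that two-month jump:
monthly_growth = (now / then) ** (1 / 2) - 1
print(f"~{monthly_growth:.0%} per month")
```

Note what the convention flatters: a run rate assumes the latest month simply repeats twelve times, so for a company growing this fast it understates the optimistic scenario and says nothing about whether the growth can be sustained.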
As Axios’s Jim VandeHei recently pointed out, Anthropic grew four times as much during the first quarter of this year as Google did over three years during its peak expansion. And although Anthropic is the standout, the rest of the sector is growing quickly too. OpenAI’s annualized revenue increased by nearly 20 percent from December to February. Google, Microsoft, and Amazon reported in February that their cloud revenue had grown by 48 percent, 39 percent, and 24 percent, respectively, compared with the year prior, largely driven by AI firms using their services. CoreWeave, a “neo-cloud” company that rents out chips and data-center space to AI companies, saw its annual revenue grow by 168 percent last year; the chipmaker Micron’s revenue nearly tripled. “It’s very important to emphasize that this pace of revenue growth is absolutely not normal,” Azeem Azhar, a widely cited AI-industry analyst, told me. “Even the biggest AI boosters, myself included, have been caught by surprise by just how fast these companies are taking off.”

Perhaps most important, the AI models behind all of this revenue growth keep getting better. In early April, Anthropic announced Mythos, a new model apparently so powerful that the company did not release it to the public. Mythos has blown away just about every benchmark of AI progress, including completing complex coding tasks and solving graduate-level problems across a range of subjects. (It has also discovered cybersecurity vulnerabilities that had gone undetected by humans for decades, hence its limited release.) OpenAI’s newly released GPT-5.5 isn’t far behind. “On basically every indicator we have, we were already seeing a big acceleration in the pace of AI progress,” Jean-Stanislas Denain, a senior researcher at Epoch AI, a think tank that measures AI capabilities, told me. “And that was before Mythos.”

Some people, however, still believe that the AI sector only appears to be on solid footing.
In this telling, surface-level indicators are masking what is, in fact, the peak of a speculative frenzy.

Flagship AI companies, including OpenAI and Anthropic, might be bringing in lots of revenue, but they aren’t yet profitable. They are still spending all of that money and more to cover the cost of developing their next model. In order for these companies to turn a profit, their revenues need to continue growing quickly for at least a few more years. (Anthropic expects to turn a profit in 2028 and OpenAI in 2030.) The question is whether their current growth rates are sustainable.

The pessimistic case starts from the premise that software development is different from the rest of white-collar work. Coding involves huge amounts of training data, a relatively limited range of possible outcomes, and outputs that can be objectively evaluated—all of which makes it ideally suited for AI automation. That isn’t true of all knowledge work. A legal brief or marketing campaign cannot be quickly checked against some objective measure of excellence, and relatively little domain-specific data exists to train bots on such tasks. That could make companies in those fields less willing to spend on AI products. “Even if white-collar workers use these AI tools for some things, it won’t look like anything close to what we’re seeing right now for coders,” Paul Kedrosky, a managing partner at SK Ventures and research fellow at MIT who has become a prominent proponent of the bubble thesis, told me.

AI companies are investing even more money into chips and infrastructure in anticipation of even more demand. But if the current boom turns out to be limited to coding, then by the time the new data centers are built, there won’t be enough customers to pay for them. Instead of turning a profit, the AI companies—not to mention the chipmakers, data-center builders, and cloud providers—will be stuck with huge losses on their books.
At that point, the AI bubble will be even bigger than it was six months ago, and the pop could be even more painful. “The best analogy to me is the real-estate market in 2006, 2007,” Kedrosky said. “Market hype leads to more demand. More demand makes you think you need more supply. Before you know it, you’ve built more homes than anyone can actually afford. And eventually it all falls apart.”

This is where a debate superficially about finance turns out to hinge on deeper philosophical questions about the nature of human work. A separate school of thought holds that most knowledge-work tasks share the same basic structure, and thus can be automated. As a group of analysts at SemiAnalysis recently argued, all knowledge work, including coding, is made up of four basic components: consuming information (“Read”), applying existing knowledge (“Think”), producing a structured output (“Write”), and checking that output against some standard (“Verify”). Coding might have certain qualities that make it easier for AI to perform this basic four-step process—such as more data to read and objective standards to verify an output—but that doesn’t make the field unique.

For instance, even if no objective standard for a “good” academic paper or legal brief exists, experts in those fields tend to have a clear sense of better or worse. Perhaps AI systems could develop such a sense if given enough high-quality examples to learn from. “There’s clearly a spectrum here, with coding on one end and things with really hard-to-judge outputs, like short-form fiction writing, on the other,” Mollick, the University of Pennsylvania professor, told me. “But a lot of knowledge work—law, finance, consulting, marketing—falls somewhere in the middle. And many of the tasks in those jobs are probably closer to the coding side of things.”

As a professional writer, I find this suggestion unpalatable. But the evidence in favor of it is growing.
A recent MIT study attempted to quantify the ability of AI systems to perform some 3,000 real-world white-collar tasks, such as designing an education curriculum and creating a product-launch plan. After the AI models performed the tasks, the researchers asked human experts to rate the output. Any output that human reviewers considered good enough to be sent to a manager with no human edits was considered “complete.”

In mid-2024, leading AI models were able to complete 50 percent of white-collar tasks that would take a human three to four hours; just over a year later, they could complete 65 percent. At that rate, the authors estimate, AI systems will be able to complete 80 to 95 percent of text-based tasks by 2029. “This pace of improvement isn’t quite as fast as what we’ve seen with AI and coding,” Matthias Mertens, one of the co-authors of the study, told me. “But it’s still really, really fast.”

That study considered only chatbots. So-called agentic tools, such as Claude Cowork, are capable of taking over a worker’s laptop and performing a whole suite of noncoding tasks, such as creating PowerPoint decks, sending emails, and scheduling meetings. And workers are only beginning to learn how to use them. Azhar, the AI-industry analyst, told me that when he and his colleagues are planning to launch a new product, they will have their AI agents create a panel of artificial customers broadly representative of their actual customer base, conduct a focus group with these robot customers, produce a report based on what they’ve found, and then turn that report into a list of specific product improvements. All of this happens while the human product managers are sleeping; the end result is waiting for them when they wake up. “That is the kind of process that used to require a whole team of workers and months of time,” Azhar said.
“Now we’re doing it three times every week.”

Six months ago, people arguing that AI was a bubble were pointing to real-world facts, whereas people arguing against the bubble hypothesis were making speculative promises about the future. Today, the roles have reversed. AI’s explosive growth may yet encounter some unforeseen obstacle. But the burden of proof has shifted to the naysayers.