When will AI automate all mental work, and how fast?

Published on May 31, 2025 4:18 PM GMT

Rational Animations takes a look at Tom Davidson's Takeoff Speeds model (https://takeoffspeeds.com). The model uses formulas from economics to answer two questions: how long do we have until AI automates 100% of human cognitive labor, and how fast will that transition happen? The primary scriptwriter was Allen Liu (the first author of this post), with feedback from the second author (Writer), other members of the Rational Animations team, and external reviewers. Production credits are at the end of the video. You can find the script of the video below.

How long do we have until AI will be able to take over the world? AI technology is hurtling forward. We've previously argued that a day will come when AI becomes powerful enough to take over from humanity if it wanted to, and by then we'd better be sure that it doesn't want to. So if this is true, how much time do we have, and how can we tell?

AI takeover is hard to predict because, well, it's never happened before, but we can compare it to other major global shifts in the past. The rise of human intelligence is one such shift; we've previously talked about work by researcher Ajeya Cotra, which tries to forecast AI by considering various analogies to biology. To estimate how much computation might be needed to make human-level AI, it might be useful to first estimate how much computation went into making your own brain. Another good example of a major global shift might be the industrial revolution: steam power changed the world by automating much of physical labor, and AI might change the world by automating cognitive labor. So we can borrow models of automation from economics to help forecast the future of AI.

AI impact researcher Tom Davidson, in a report published in June 2023, used a mathematical model derived from economics principles to estimate when AI will be able to automate 100% of human labor. You can visit takeoffspeeds.com if you want to play around with the model yourself. Let's dive into the questions this model is meant to answer, how the model works, and what this all means for the future of AI.

Davidson's model is meant to predict two related ideas: AI timelines and AI takeoff speed. AI timelines concern when AI will reach certain milestones, in this model's case automating a specific percentage of labor. A short timeline means such AI arrives soon; a long timeline means the opposite. AI takeoff is the process by which AI systems go from being much less capable than humans to much more capable. AI takeoff speed is how long that transition takes: it might be "fast", taking weeks or months; "slow", requiring decades; or "moderate", taking place over a few years. At least in principle, almost any combination of timelines and takeoff speeds could occur: if AI researchers got stuck for the next half century but then suddenly built a superintelligence all at once on April 11, 2075, that would be a fast takeoff with a long timeline.

One way to measure takeoff speed is to look at the time it takes to go from building a weaker AI, somewhat below human capabilities, to building a stronger AI that's more capable than humans. Davidson defines the weaker AI as systems that can automate 20% of the labor humans do today, and the stronger AI as systems that can automate 100% of that labor. Let's call these points "20%-AI" and "100%-AI".
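In symbols (our shorthand here, not notation from Davidson's report): if $t_{20\%}$ is the year 20%-AI arrives and $t_{100\%}$ is the year 100%-AI arrives, the timelines question asks where $t_{100\%}$ falls, while the takeoff question asks how large the gap between the two is:

$$\text{takeoff duration} = t_{100\%} - t_{20\%}.$$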
To estimate how long this process will take, Davidson approaches the problem in two parts: first, he estimates how many more resources we'll need to train 100%-AI than 20%-AI; second, he estimates how quickly these resources will grow during this time. In his model, these resources can take two forms. One is additional computing power and time, or "compute" for short, which can be used for training AI systems. The other is better AI algorithms. If you use a better algorithm, you get better performance for the same compute, so the model treats algorithmic improvement as reducing the amount of compute needed to develop a given AI system.

To go from 20% automation to 100%, Davidson estimates we might need to increase our compute, and/or improve the efficiency of our algorithms, by about 10,000 times. For example, we could do this by using 1,000 times more compute and making algorithms 10 times more efficient, or by using 10 times more compute and making algorithms 1,000 times more efficient, or any combination in between. This 10,000-fold estimate is very uncertain: the model incorporates scenarios where the number is as low as 10 and as high as 100,000,000. The estimate was arrived at by considering several different reference points, like comparing animal brains to human brains, and looking at AI models that have surpassed humans in specific areas like strategy games.

Now, it's possible that developing superhuman AI will turn out to require a fundamentally different approach from the paradigm we're currently using, and that simply improving current techniques and using more resources won't be enough. In that case, no amount of compute would suffice to go from 20% to 100%, so this framework wouldn't end up being applicable. But there is some evidence suggesting that today's AI paradigm might be enough. A lot of the recent rapid progress in AI has come from throwing more compute and data at the problem, rather than from advances in techniques. Compare GPT-1 from 2018, which had trouble stringing multiple sentences together, to GPT-4 from 2023, which can write complete news articles and which powers the paid version of the ChatGPT service as of 2024. GPT-4 uses an improved version of what's fundamentally much the same algorithm as GPT-1. The ideas behind the models are very similar; the key difference is that GPT-4 was trained using about a million times more processing power.[1]

Just how much total compute do we expect to need to reach our 100%-AI mark? The estimate Davidson used for the model is 10^36 FLOP using algorithms from 2022, with an uncertainty of 1,000 times in either direction. These requirements are colossal: even the lowest end of Davidson's estimates for 100% automation, a training run of 10^34 FLOP, would have taken the top supercomputer of 2022 so long that, in order to be done with the computation today, it would have needed to start back in the Jurassic period.[2]
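For a sanity check on that claim, here is the back-of-the-envelope arithmetic behind footnote [2], taking the top 2022 supercomputer to sustain roughly $1.679 \times 10^{18}$ FLOP per second:

$$\frac{10^{34}\ \text{FLOP}}{1.679 \times 10^{18}\ \text{FLOP/s}} \approx 6 \times 10^{15}\ \text{s} \approx 1.9 \times 10^{8}\ \text{years},$$

about 190 million years, which indeed puts the start of the run back in the Jurassic.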
Obviously, we aren't going to wait around for that training run to finish. Instead, Davidson expects AI to progress in three ways: investors will pour in more money to buy more chips; computer chips at a given price will continue to get more powerful; and AI software will improve, becoming able to use compute more efficiently. Buying more chips and designing better chips directly increases compute and gets us closer to the target. Software improvements are modeled as a multiplier: if AI software in 2025 is twice as efficient with its hardware as AI software in 2024, then each computer operation in 2025 counts double compared to 2024. So our "effective compute" at any given time equals our actual hardware compute times this software multiplier.

Now that we understand the resource requirements, it's time to add in the economics. Several interconnected factors go into modeling how fast these resources will grow. The biggest factor in AI takeoff speed is how much AI itself will be able to speed up AI development. We can already see this starting to happen, with large language models helping programmers write code, to such an extent that some academics will avoid writing code and focus on other work on days when their LLM is down.[3] The more powerful this feedback loop, the faster takeoff will be. Economists already have tools to model the effects of automation on human labor in other contexts like industrialization. Davidson borrows a specific formula for this called the CES production function.[4]

Another major factor is that as AI becomes more impressive, it will attract more investment. To model this feedback loop, Davidson's model has investment rise more quickly once AI capabilities reach a certain threshold.

Davidson also throws in a few other parameters. These include how easy it is to automate AI research, and how much an AI's performance can improve after it's been trained, by people figuring out better ways to use it: for instance, asking LLMs to lay out their reasoning step by step, or picking the best result from many attempts, can improve the quality of their answers.

With all this accounted for, it's time to actually run the calculations. For this, Davidson uses a Monte Carlo method: each run of the model randomly selects a value for each of the inputs from a distribution within the constraints we've discussed. These values slot into the equations, and we get one possible scenario for the future. By repeating this process many times with different values for the inputs, we can build up a full picture of the range of AI futures that our estimates imply.
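To make the Monte Carlo idea concrete, here is a deliberately stripped-down sketch in Python. It is not Davidson's model: the real one at takeoffspeeds.com tracks investment, hardware, software, and a CES production function over time, while this toy samples just two uncertain inputs (the effective-compute gap between 20%-AI and 100%-AI, and the growth rate of effective compute) from made-up placeholder distributions and reads off a takeoff duration:

```python
import random
import statistics

random.seed(0)  # reproducible runs

def one_run() -> float:
    """One Monte Carlo sample: returns a takeoff duration in years."""
    # Effective-compute gap between 20%-AI and 100%-AI, in powers of ten.
    # Median 10^4 (the ~10,000x estimate); clamped at 10^1, the low end above.
    log10_gap = max(1.0, random.gauss(4.0, 2.0))
    # Annual growth of effective compute (investment + better chips + software),
    # in powers of ten per year; median ~4x/year, a made-up placeholder.
    log10_growth = max(0.05, random.gauss(0.6, 0.3))
    # Years for effective compute to grow by the sampled gap.
    return log10_gap / log10_growth

samples = sorted(one_run() for _ in range(100_000))
print(f"median takeoff: {statistics.median(samples):.1f} years")
print(f"10th percentile: {samples[10_000]:.1f} years")
print(f"90th percentile: {samples[90_000]:.1f} years")
```

Swap in Davidson's actual distributions and dynamics, and the same sample-and-summarize loop yields the headline numbers below.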
Let's start with the headlines: the model's median prediction is that AI will be able to automate all human labor in the year 2043, with takeoff taking about three years. So in this median scenario, 20% of current human labor would be automatable by 2040, and 100% by 2043. This is only the middle of a very broad range of possibilities, however: the model gives a 10% chance that 100%-AI comes before 2030, and a 10% chance that it comes after 2100. For takeoff speed, there's a 10% chance that it takes less than 10 months, and a 10% chance it takes more than 12 years.

On the takeoffspeeds.com website, you can rerun the model using different values for the inputs. These include all the inputs we've already mentioned, along with others, like inputs representing how easy it is to automate R&D in hardware and AI software, and in the economy as a whole.

There are a few major takeaways from Davidson's model beyond the specific dates for AI milestones. One is the answer to this work's original motivating question: even in a scenario with no major jumps in AI progress, a continuous takeoff, AI could easily race past human capabilities in just a few years or even less.

Another takeaway is that many different factors work together to shorten AI timelines. These include: increasing investment as AI continues to improve; the ability of AI to speed up further AI development even before it reaches human-level capabilities; rapid progress in AI algorithms; and the fact that training an AI takes much more compute than running it does (roughly speaking, training means running the model over an enormous dataset many times, while using it means running it once). If you have enough compute to train an AI system, you have enormously more compute than you need to run it, so techniques that let you spend extra compute at runtime to get better performance could boost the system a lot.

One final takeaway is that it's very hard to find a realistic set of inputs to this model that doesn't get us to AI that can perform any cognitive task by around 2060. Even if AI progress in general is slower than we expect, and reaching human capabilities is a harder task for AI than we expect, it's very unlikely that world-changing AI systems are more than a few decades away. Importantly, this does depend on the assumption that it's possible to build superhuman AI within the current paradigm, though of course new paradigms may also be developed.

This model, like any model, has its limitations. For one, any model is only as good as the assumptions that go into it, though estimating the inputs and building a model is usually better than just guessing the final answer. See our videos on Bayesian reasoning and prediction markets for more on that point. Davidson outlines the reasoning behind each assumption in his full report. He also discusses some factors that weren't included in the model, like the amount of training data that advanced AI models would need.

More generally, models like these are meant to give us tools to think about the worlds they describe. Economics is not a field known for making perfect predictions of the long-term future, but it has given us a toolbox for understanding markets and human behavior that has proven incredibly useful. Hopefully, by applying those same strategies to forecasting AI, we can better prepare ourselves for whatever the future has in store for us.

[1] https://epochai.org/data/epochdb/table
[2] https://www.wolframalpha.com/input?i=%2810%5E34+%2F+%281.679*10%5E18%29%29+seconds
[3] https://www.youtube.com/watch?v=nNSHP8L-K_I (38:23)
[4] https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/#0-short-summary