When will AI automate all mental work, and how fast?

 

Rational Animations analyzes Tom Davidson's Takeoff Speeds model, which uses formulas from economics to predict when and how fast AI will automate human cognitive labor. The model's median prediction is that AI will be able to automate all human labor in 2043, with the transition taking about 3 years. The model accounts for growth in compute, algorithmic improvements, and AI itself accelerating AI development. Despite the uncertainty, it offers a framework for thinking about AI timelines and takeoff speeds, and highlights AI's potentially transformative impact on the economy and society. Readers can adjust the parameters at Takeoffspeeds.com to explore different scenarios.

🗓️ Davidson's model is meant to predict AI timelines and AI takeoff speed. AI timelines refer to exactly when AI reaches certain milestones, in this model's case automating a specific percentage of labor. AI takeoff is the process by which AI systems go from far below human capability to far above it, and takeoff speed is how long that transition takes.

🚀 The model estimates that going from 20% automation to 100% may require increasing compute and/or algorithmic efficiency by about 10,000 times. This estimate was derived from several reference points, such as comparing animal brains to human brains and examining AI models that have surpassed humans in specific domains.

💡 The model considers several interconnected factors in modeling how fast these resources will grow. AI itself accelerating AI development is the biggest factor in takeoff speed. The model also accounts for AI attracting more investment as it becomes more impressive.

💰 Davidson's model runs its calculations with a Monte Carlo method: each run randomly selects a value for each input from a distribution within the stated constraints. Repeating this process many times builds up a full picture of the range of AI futures the estimates imply.

Published on May 31, 2025 4:18 PM GMT

Rational Animations takes a look at Tom Davidson's Takeoff Speeds model (https://takeoffspeeds.com). The model uses formulas from economics to answer two questions: how long do we have until AI automates 100% of human cognitive labor, and how fast will that transition happen? The primary scriptwriter was Allen Liu (the first author of this post), with feedback from the second author (Writer), other members of the Rational Animations team, and external reviewers. Production credits are at the end of the video. You can find the script of the video below.


How long do we have until AI is able to take over the world? AI technology is hurtling forward. We’ve previously argued that a day will come when AI becomes powerful enough to take over from humanity if it wanted to, and that by then we’d better be sure that it doesn’t want to. So if this is true, how much time do we have, and how can we tell?

 

AI takeover is hard to predict because, well, it’s never happened before, but we can compare it to other major global shifts in the past. The rise of human intelligence is one such shift; we’ve previously talked about work by researcher Ajeya Cotra, which tries to forecast AI by considering various analogies to biology. To estimate how much computation might be needed to make human-level AI, it might be useful to first estimate how much computation went into making your own brain. Another good example of a major global shift might be the industrial revolution: steam power changed the world by automating much physical labor, and AI might change the world by automating cognitive labor. So, we can borrow models of automation from economics to help forecast the future of AI.

 

AI impact researcher Tom Davidson, in a report published in June 2023, used a mathematical model derived from economics principles to estimate when AI will be able to automate 100% of human labor.  You can visit “Takeoffspeeds.com” if you want to play around with the model yourself.  Let’s dive into the questions this model is meant to answer, how the model works, and what this all means for the future of AI.

 

Davidson’s model is meant to predict two related ideas: AI timelines and AI takeoff speed.  AI timelines have to do with exactly when AI will reach certain milestones, in this model’s case automating a specific percentage of labor.  A short timeline would be if such AI arrives soon, while a long timeline would be the opposite.

 

AI takeoff is the process where AI systems go from being much less capable than humans to much more capable.  AI takeoff speed is how long that transition takes: it might be “fast”, taking weeks or months; it might be “slow” requiring decades; or it might be “moderate”, taking place over a few years.

 

At least in principle, almost any combination of timelines and takeoff speeds could occur: if AI researchers got stuck for the next half century but then suddenly built a superintelligence all at once on April 11, 2075, that would be a fast takeoff and a long timeline.

 

One way to measure takeoff speeds is by looking at the time it takes us to go from building a weaker AI, somewhat below human capabilities, to building a stronger AI that’s more capable than humans.  Davidson defines the weaker AI as systems that can automate 20% of the labor humans do today, and the stronger AI as systems that can automate 100% of that labor. Let’s call these points ‘20%-AI’ and ‘100%-AI’.

 

To estimate how long this process will take, Davidson approaches the problem in two parts. First he estimates how much more resources we’ll need to train 100%-AI than 20%-AI, and second, he estimates how quickly these resources will grow during this time. In his model, these resources can take two forms.  One is additional computer power and time, or “compute” for short, that can be used for training AI systems.  The other is better AI algorithms. If you use a better algorithm, you get better performance for the same compute, so this model assumes that algorithmic improvement reduces the amount of compute needed to develop a given AI system.

 

To go from 20% automation to 100%, Davidson estimates we might need to increase our compute, and/or improve the efficiency of our algorithms, by about 10,000 times. For example, we could do this by using 1000 times more compute and making algorithms 10 times more efficient, or using 10 times more compute and making algorithms 1000 times more efficient, or any combination in between. This estimate of 10,000 times is very uncertain; the model incorporates scenarios where that number is as low as 10 times and as high as 100,000,000 times. The 10,000 times estimate was arrived at by considering several different reference points, like comparing animal brains to human brains, and looking at AI models that have surpassed humans in specific areas like strategy games.
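
Since the compute and algorithm contributions multiply, any split that reaches the overall factor works. A minimal sketch of this bookkeeping, using the numbers from the paragraph above rather than Davidson's actual calibration:

```python
# The required capability gap is a single multiplicative factor, so any
# split between extra compute and better algorithms that multiplies out
# to that factor is equivalent in this framework.
required_gap = 10_000  # Davidson's central estimate from 20%-AI to 100%-AI

scenarios = [
    (1_000, 10),   # 1000x more compute, 10x more efficient algorithms
    (10, 1_000),   # 10x more compute, 1000x more efficient algorithms
    (100, 100),    # an even split
]

for compute_mult, algo_mult in scenarios:
    assert compute_mult * algo_mult == required_gap
```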

 

Now it is possible that developing superhuman AI will turn out to require a fundamentally different approach from the paradigm we’re currently using, and simply improving current techniques and using more resources won’t be enough. In that case, no amount of compute would be enough to go from 20% to 100%, so this framework wouldn’t end up being applicable. But there is some evidence suggesting that today’s AI paradigm might be enough. A lot of the recent rapid progress in AI has come from throwing more compute and data at the problem, rather than from advancements in techniques. Compare GPT-1 from 2018, which had trouble stringing multiple sentences together, to GPT-4 from 2023, which can write complete news articles and powers the paid version of the ChatGPT service as of 2024. GPT-4 uses an improved version of what’s fundamentally pretty much the same algorithm as GPT-1. The ideas behind the models are very similar; the key difference is that GPT-4 was trained using about a million times more processing power.[1]

 

Just how much total compute do we expect to need to reach our 100%-AI mark? The estimate Davidson used for the model is 10^36 FLOPs using algorithms from 2022, with an uncertainty of 1000 times in either direction. These requirements are colossal: even the lowest end of Davidson’s estimates for 100% automation, a training run of 10^34 FLOPs, would take the top supercomputer of 2022 so long that, to finish that computation today, it would have needed to start in the Jurassic period.[2] Obviously, we aren’t going to wait around for that training run to finish. Instead, Davidson expects AI will progress in three ways: investors will pour in more money to buy more chips; computer chips at a given price will continue to get more powerful; and AI software will improve, becoming able to use compute more efficiently.
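
A back-of-the-envelope check of the Jurassic claim, assuming a top 2022 supercomputer sustains roughly 1.7×10^18 FLOP/s (an approximate peak figure, not taken from Davidson's report):

```python
# Rough arithmetic behind the "Jurassic period" claim; all figures approximate.
flops_needed = 1e34           # low end of Davidson's 100%-AI estimate
supercomputer_flops = 1.7e18  # ~peak throughput of a top 2022 supercomputer
seconds_per_year = 3.15e7

years = flops_needed / supercomputer_flops / seconds_per_year
# On the order of 200 million years, which lands in the Jurassic
# (roughly 145-200 million years ago).
print(f"{years / 1e6:.0f} million years")
```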

 

Buying more chips and designing better chips directly increases compute and gets us closer to the target.  Software improvements are modeled as a multiplier: if AI software in 2025 is twice as efficient with its hardware as AI software in 2024, then each computer operation in 2025 counts double compared to 2024.  So our “effective compute” at any given time is equal to our actual hardware compute times this software multiplier.
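
The effective-compute bookkeeping can be sketched in a couple of lines, with toy numbers that are purely illustrative:

```python
# Effective compute = physical hardware compute x software efficiency
# multiplier. Numbers below are invented for illustration.
def effective_compute(hardware_flops, software_multiplier):
    return hardware_flops * software_multiplier

compute_2024 = effective_compute(1e26, 1.0)
# If 2025 software is twice as efficient with the same hardware,
# each operation counts double:
compute_2025 = effective_compute(1e26, 2.0)
assert compute_2025 == 2 * compute_2024
```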

 

Now that we understand our resource requirements, it’s time to add in the economics.  There are several interconnected factors that go into modeling how fast these resources will grow.

 

The biggest factor in AI takeoff speed is how much AI itself will be able to speed up AI development. We can already see this starting to happen with large language models helping programmers to write code, to such an extent that some academics will avoid writing code and focus on other work on days when their LLM is down.[3] The more powerful this feedback loop, the faster takeoff will be.  Economists already have tools to model the effects of automation on human labor in other contexts like industrialization.  Davidson borrows a specific formula for this called the “CES Production Function”.[4]
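
For readers curious what a CES production function looks like, here is a minimal two-input version with made-up parameter values; Davidson's actual calibration uses more inputs and different parameters:

```python
# Two-input CES (constant elasticity of substitution) production function:
# output produced from human labor L and AI labor A. The parameter rho
# controls substitutability (rho -> 1: perfect substitutes; rho -> -inf:
# perfect complements). share and rho values here are illustrative only.
def ces_output(L, A, share=0.5, rho=0.5):
    return (share * L**rho + (1 - share) * A**rho) ** (1 / rho)

# Adding AI labor raises research output, with diminishing returns
# unless the inputs are close substitutes:
base = ces_output(L=100, A=10)
more_ai = ces_output(L=100, A=20)
assert more_ai > base
```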

 

Another major factor is that as AI becomes more impressive, it will attract more investment.  To model this feedback loop, Davidson’s model has investment rise more quickly once AI capabilities reach a certain threshold.

 

Davidson also throws in a few other parameters.  These include how easy it is to automate AI research, and how much an AI’s performance can improve after it’s been trained by people figuring out better ways to use it, like how asking LLMs to lay out their reasoning step by step or picking the best result from many attempts can improve the quality of their answers.

 

With all this accounted for, it’s time to actually run the calculations.  For this, Davidson uses a Monte Carlo method: each run of the model randomly selects a value for each of the inputs from a distribution within the constraints we’ve discussed.  These values slot into the equations, and we get one possible scenario for the future.  By repeating this process many times with different values for the inputs, we can build up a full picture of the range of AI futures that our estimates imply.
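
A toy version of this Monte Carlo loop, with invented distributions standing in for the model's real inputs and equations:

```python
import math
import random
import statistics

# Toy Monte Carlo in the spirit of Davidson's approach. The real model's
# distributions and equations are far richer; everything here is invented
# to illustrate the sampling procedure only.
random.seed(0)

def one_run():
    # Sample uncertain inputs from wide distributions:
    gap = 10 ** random.uniform(1, 8)          # 20%-AI -> 100%-AI gap (10x..1e8x)
    annual_growth = random.uniform(1.5, 4.0)  # yearly effective-compute growth
    # Years to close the gap at this growth rate:
    return math.log(gap) / math.log(annual_growth)

takeoff_years = sorted(one_run() for _ in range(10_000))
median = statistics.median(takeoff_years)
p10, p90 = takeoff_years[1_000], takeoff_years[9_000]
```

Each call to `one_run` produces one possible future; the sorted results give the kind of median and 10th/90th percentile summaries quoted below.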

 

Let’s start with the headlines: the model’s median prediction is that AI will be able to automate all human labor in the year 2043, with takeoff taking about 3 years.  So in this scenario, 20% of current human labor would be automatable by 2040, and 100% would be automatable in 2043.  This is only the middle of a very broad range of possibilities, however: the model gives a 10% chance that 100%-AI comes before 2030, and a 10% chance that it comes after 2100.  For takeoff speed, there’s a 10% chance that it takes less than 10 months, and a 10% chance it takes more than 12 years.

 

On the Takeoffspeeds.com website, you can rerun the model using different values for the inputs.  These include all the inputs we’ve already mentioned, along with others like inputs representing how easy it is to automate R&D in hardware and AI software, and in the economy as a whole.

 

There are a few major takeaways from Davidson’s model even beyond the specific dates for AI milestones. One is the answer to this work’s original motivating question: even in a scenario with no major jumps in AI progress, that is, a continuous takeoff, AI could easily race past human capabilities in just a few years or even less.

 

Another takeaway is that there are many different factors working together to shorten AI timelines.  These include: increasing investment as AI continues to improve; the ability of AI to speed up further AI development even before it reaches human level capabilities; rapid progress in AI algorithms; and the fact that training an AI takes much more compute than running it does. If you have enough compute to train an AI system, you have enormously more compute than you need to run it. So if you have techniques that let you spend extra compute to get better performance, this could boost the system a lot.

 

One final takeaway is that it’s very hard to find a realistic set of inputs to this model that doesn’t get us to AI that can perform any cognitive task by around 2060.  Even if AI progress in general is slower than we expect, and reaching human capabilities is a harder task for AI than we expect, it’s very unlikely that world-changing AI systems are more than a few decades away.

 

Importantly, this does depend on the assumption that it’s possible to build superhuman AI within the current paradigm, although of course new paradigms may also be developed.

 

This model, like any model, has its limitations.  For one, any model is only as good as the assumptions that went into it, though guessing numbers and making a model is usually better than just making a guess of a final answer.  See our videos on Bayesian reasoning and prediction markets for more on that point.  Davidson outlines the reasoning behind each assumption in his full report.  He also discusses some factors that weren’t included in the model, like the amount of training data that advanced AI models would need.

 

More generally, models like these are meant to give us the tools to think about the worlds they describe.  Economics itself is not a field known for making perfect predictions of the long term future, but it’s given us a toolbox for understanding markets and human behavior that has proven incredibly useful.  Hopefully, by applying those same strategies to forecasting AI, we can better prepare ourselves for whatever the future has in store for us.

  1. ^
  2. ^
  3. ^
  4. ^


Discuss
