Artificial Ignorance
The Problem with AGI Predictions

This article examines the author's skepticism toward AI 2027's forecast of an AGI (artificial general intelligence) timeline. The author was initially doubtful of AI 2027, but given its lead author's earlier accurate predictions of AI trends, began to take the forecast seriously. The article walks through the main content of AI 2027 and lays out three core doubts about AGI timeline predictions: the complexity of the real world, the gap between models and products, and the importance of human decision-making.

🤔 The AI 2027 project forecasts a detailed timeline for AGI development and presents two divergent future scenarios: in one, humanity successfully keeps AI development under control; in the other, competitive pressure drives rapid AI development without adequate safety measures, ultimately risking human extinction.

💡 The author remains cautious about AI 2027's predictions, but acknowledges that its lead author has a track record of accurate AI forecasts, including anticipating the emergence of ChatGPT, the importance of multimodal transformers, and US export controls on AI chips.

🧐 The author raises three main doubts about AGI timeline predictions: first, the real world is far more complex than any model and is hard to predict; second, model performance is not the same as a product in actual use; finally, human decisions and interventions remain essential to AI development and cannot be fully replaced by computers.

A few weeks ago, I stumbled across AI 2027, a forecast of near-term AI progress that predicts we'll reach AGI around 2027, and (in the worst-case scenario) human extinction by 2030.

My reaction to it was similar to my response to other AGI timelines I’ve read: that it was somewhere between science fiction and delusion.

But as I dug deeper into the project, I learned something interesting about its lead author, Daniel Kokotajlo. It turns out he previously wrote a similar post that correctly predicted the major trends leading to ChatGPT, GPT-4o, and o1 - nearly four years ago.

With that in mind, I started to take AI 2027 more seriously, but it still strained my credulity. I have a hard time believing that we're going to automate 90% of jobs and cure all diseases by the next Presidential election. And yet, there are many, many smart people who seem to believe this (or at least some close variant of it).

So I wanted to finally articulate precisely why I'm skeptical of not just AI 2027, but nearly all AGI timelines that I've encountered.

I don’t know whether I’m right - sharper minds than mine have spent far more hours pondering this - but I'd like to finally get my arguments down on paper, for posterity if nothing else.


What is AI 2027?

For a bit of background, let’s recap the main ideas in AI 2027. The research project presents a detailed, month-by-month scenario forecasting the development of AGI. It lays out a series of specific technical milestones and their corresponding societal impacts, ultimately presenting two divergent futures - a "green" ending and a "red" ending.

The document resembles other AGI write-ups like Leopold Aschenbrenner's "Situational Awareness," which measured AI progress through order-of-magnitude (OOM) advances in data centers, chip development, and algorithmic breakthroughs. Like other AI safety discussions, it attempts to quantify existential risk, though it goes further than most in providing specific predictions about AI capabilities and deployment.

If you haven't seen it, it's certainly a fascinating read. Some of the major milestones mentioned include:

There's much, much more in the full forecast, but I also want to touch on the two endings. In the slowdown (green) ending, humanity successfully coordinates a slowdown in AI development, and AI systems remain under human control. The US and China broker a deal on mutually aligned AGI, and by the 2028 presidential election, we're ushering in a new AI era - by 2029, we've developed "fusion power, quantum computers, and cures for many diseases."

In the race (red) ending, competitive pressures between the US and China drive rapid AI development without adequate safety measures. We create ever more intelligent AGIs, each as misaligned as the last (as does China, and eventually our AGI and China's AGI negotiate their own agreement). Each country invests more and more in new robotics factories and labs to keep up with the arms race - by late 2028, these factories are producing "a million new robots per month". And by 2030, our AGI overlord finally ends the human race in its quest to become ever more advanced.

What Kokotajlo Got Right

Like I mentioned, I was initially skeptical of AI 2027's predictions. But there was something that made me - at least partially - change my mind about how much weight to give it: the lead author, Daniel Kokotajlo.

In August 2021 (ChatGPT came out in November 2022, if you're counting at home), he wrote a very similar blog post titled "What 2026 Looks Like." In it, he outlined his predictions for the next five years of AI research.

Here are some of the things he got right:

To be clear: he also got several things wrong, and a number of other predictions have yet to materialize. And yet - these predictions were laid out over a year before ChatGPT even existed! From someone who, as far as I can tell, wasn't even an AI researcher at the time[1].

To quote Scott Alexander (who would come to collaborate on AI 2027):

If you read Daniel's [2021] blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.

I wasn't the only one who noticed. A year later, OpenAI hired Daniel onto their policy team. While he worked there, he was limited in his ability to speculate publicly. "What 2026 Looks Like" promised a sequel about 2027 and beyond, but it never materialized.

It’s because of this impressive track record that I think it's worth considering what AI 2027 likely gets right. And this time around, he's partnered with forecasting experts and is coming to the table after a stint behind the scenes at OpenAI.

With that in mind, I think several short-term predictions in AI 2027 seem likely based on current trends:

The specificity of Kokotajlo's predictions is also commendable. Few AI analysts will make concrete forecasts over multi-year timescales, preferring vague trends to testable claims. I very much appreciate the courage this requires - the best I've done is predict a year in advance, with much higher-level analysis than AI 2027.

My Problem(s) with AGI Timelines

But despite some of the strengths of AI 2027, I can't help but be skeptical of its long-term predictions, for the same reasons that I feel other AGI timelines fall apart. Yes, I understand that humans are incredibly bad at appreciating exponential curves. And yes, I do think we're on a very rapid curve when it comes to model benchmark performance.

But there are three areas of disconnect that I have yet to be convinced about:

    1. Reality has a surprising amount of detail

    2. Models are not the same as products

    3. Decisions are made by people, not computers

Let's go through each one.

