LessWrong · December 23, 2024
What are the strongest arguments for very short timelines?

The article examines differing views within the AI field on the timeline for achieving artificial general intelligence (AGI). Some argue AGI will arrive within five years, but their reasoning mostly rests on intuition, the views of industry insiders, or recent progress. However, most machine learning researchers, including experts who have published at top AI conferences, believe AGI is further away, or are unsure whether current methods can scale to AGI at all. Recent surveys likewise show that researchers holding the "within five years" view remain a minority. The article notes that while progress in large models has been striking, discussion of current models' limitations, and of how those limitations might be overcome, is still lacking and deserves deeper examination.

⏱️ **Short-timeline AGI views:** Some believe AGI will be achieved within five years, but their arguments are mostly intuition, insider opinion, and recent progress, lacking in-depth analysis of current models' limitations.

📊 **Expert surveys:** A 2023 survey of 2,778 AI researchers found that most experts expect AGI to take longer, with a median of 23 or 92 years depending on how the question was phrased. In a poll at an ICLR 2024 workshop, only 16.6% of researchers held the view that AGI would arrive within five years.

🤔 **Model limitations:** The article notes that while the release of large language models such as GPT-4 sparked optimism about AGI, there is little discussion of current models' limitations, or any explicit model of how those limitations would be overcome.

💡 **Potential breakthroughs:** The article points to the "data wall" and "unhobbling" ideas in Aschenbrenner's "Situational Awareness" as potential directions that could push toward AGI, though these still require further study.

Published on December 23, 2024 9:38 AM GMT

I'm seeing a lot of people on LW saying that they have very short timelines (say, five years or less) until AGI. However, the arguments that I've seen often seem to be just one of the following:

- "I'm not going to explain but I've thought about this a lot"
- "People at companies like OpenAI, Anthropic etc. seem to believe this"
- "Feels intuitive based on the progress we've made so far"

At the same time, it seems like this is not the majority view among ML researchers. The most recent representative expert survey that I'm aware of is the 2023 Expert Survey on Progress in AI. It surveyed 2,778 AI researchers who had published peer-reviewed research in the prior year in six top AI venues (NeurIPS, ICML, ICLR, AAAI, IJCAI, JMLR); the median time for a 50% chance of AGI was either in 23 or 92 years, depending on how the question was phrased.

While a year has passed since this survey was conducted in fall 2023, my anecdotal impression is that many researchers outside the rationalist sphere still have significantly longer timelines, or do not believe that current methods would scale to AGI.

A more recent, though less broadly representative, survey is reported in Feng et al. 2024. At the ICLR 2024 "How Far Are We From AGI" workshop, 138 researchers were polled on their views. "5 years or less" was again a clear minority position, held by only 16.6% of respondents. On the other hand, "20+ years" was the view held by 37% of respondents.

Most recently, there were a number of "oh AGI does really seem close" comments with the release of o3. I mostly haven't seen these give very much of an actual model for their view either; they seem to mostly be of the "feels intuitive" type. There have been some posts discussing the extent to which we can continue to harness compute and data for training bigger models, but that says little about the ultimate limits of the current models.

The one argument that I did see that felt somewhat convincing was in the "data wall" and "unhobbling" sections of the "From GPT-4 to AGI" chapter of Leopold Aschenbrenner's "Situational Awareness", which outlined ways in which we could build on top of the current paradigm. However, this too was limited to just "here are more things that we could do".

So, what are the strongest arguments for AGI being very close? I would be particularly interested in any discussions that explicitly look at the limitations of the current models and discuss how exactly people expect those to be overcome.


