The Compendium, A full argument about extinction risk from AGI


Published on October 31, 2024 12:01 PM GMT

We (Connor Leahy, Gabriel Alfour, Chris Scammell, Andrea Miotti, Adam Shimi) have just published The Compendium, which brings together in a single place the most important arguments that drive our models of the AGI race, and what we need to do to avoid catastrophe.

We felt that something like this was missing from the AI conversation. Most of these points have been made before, but no single document has brought them together into a comprehensive worldview. We’ve tried our best to fill this gap, and we welcome feedback and debate about the arguments. The Compendium is a living document, and we’ll keep updating it as we learn more and change our minds.

We would appreciate your feedback, whether or not you agree with us.

Here is the Twitter thread and the summary:

The Compendium aims to present a coherent worldview about the extinction risks of artificial general intelligence (AGI), AI whose intelligence exceeds that of humans, in a way that is accessible to non-technical readers with no prior knowledge of AI. A reader should come away with an understanding of the current landscape, the race to AGI, and its existential stakes.

AI progress is rapidly converging on building AGI, driven by a brute-force paradigm that is bottlenecked by resources, not insights. Well-resourced, ideologically motivated individuals are driving a corporate race to AGI. They are now backed by Big Tech, and will soon have the support of nations.
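This resource-bottleneck claim can be made concrete with empirical scaling laws, which predict model performance from parameters and data alone. Below is a minimal sketch, ours for illustration rather than part of The Compendium's argument, that plugs the published Chinchilla fit from Hoffmann et al. (2022) into a few lines of Python. The constants are their estimates, and extrapolating the fit beyond its measured regime is an assumption:

```python
# Sketch: the Chinchilla scaling law (Hoffmann et al., 2022) predicts
# language-model pretraining loss from parameter count N and training
# tokens D alone -- no algorithmic "insight" term appears.
# The constants are the published fits; treating them as exact, and
# extrapolating them, is an assumption made for illustration.

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss L(N, D) = E + A / N**alpha + B / D**beta."""
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling resources 10x and 100x at a fixed parameter/token ratio
# yields a steady, predictable drop in loss:
for scale in (1, 10, 100):
    n, d = 70e9 * scale, 1.4e12 * scale  # roughly the Chinchilla-optimal ratio
    print(f"{scale:>4}x resources -> predicted loss {chinchilla_loss(n, d):.3f}")
```

If loss keeps falling on schedule as resources scale, progress is gated by money and hardware rather than by new ideas, which is what makes this a race that well-resourced actors can run.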

People debate whether or not it is possible to build AGI, but most of the discourse is rooted in pseudoscience. Because humanity lacks a formal theory of intelligence, we must rely on the empirical observation that AI capabilities are increasing rapidly, surpassing human benchmarks at an unprecedented pace.

As more and more human tasks are automated, the gap between artificial and human intelligence shrinks. At the point when AI can do all of the tasks a human can do on a computer, it will functionally be AGI, able to conduct the same AI research that we can. Should this happen, AGI will quickly scale to superintelligence, and then to levels so powerful that AI is best described as a god compared to humans. Just as humans have catalyzed the Holocene extinction, these systems pose an extinction risk for humanity, not because they are malicious, but because we will be powerless to control them as they reshape the world, indifferent to our fate.

Coexisting with such powerful AI requires solving some of the most difficult problems that humanity has ever tackled, which demand Nobel-prize-level breakthroughs, billions or trillions of dollars of investment, and progress in fields that resist scientific understanding. We suspect that we do not have enough time to adequately address these challenges.

Current technical AI safety efforts are not on track to solve this problem, and current AI governance efforts are ill-equipped to stop the race to AGI. Many of these efforts have been co-opted by the very actors racing to AGI, who undermine regulatory efforts, cut corners on safety, and are increasingly stoking nation-state conflict in order to justify racing. 

This race is propelled by the belief that AI will bring extreme power to whoever builds it first, and that the primary quest of our era is to build this technology. To survive, humanity must oppose this ideology and the race to AGI, building global governance that is mature enough to develop technology conscientiously and justly. We are far from achieving this goal, but believe it to be possible. We need your help to get there.


