LessWrong · October 7, 2024
A Narrow Path: a plan to deal with AI extinction risk

 

The article presents A Narrow Path, a comprehensive plan for dealing with AI extinction risk, laying out the concrete conditions that must be satisfied and the policies that would enforce them. It stresses that on a problem this complex we must keep iterating toward better answers in order to implement solutions that actually address it, and it outlines proposals for policymakers organized into three phases.

🎯 AI extinction risk is a major problem: humanity is currently on a path that threatens its own extinction. The document attempts to comprehensively outline the measures needed to step off that dangerous path, including new institutions, legislation, and policies to prevent the development of AI we cannot control; with correct execution, these measures should stop anyone from developing artificial superintelligence for the next 20 years.

🛡️ To ensure that measures controlling AI development do not collapse under geopolitical rivalry or rogue development by state and non-state actors, international institutions are needed; with correct execution, these measures should ensure stability and lead to an international AI oversight system that does not collapse over time.

🎉 With rogue superintelligence development prevented and a stable international system in place, humanity can focus on the scientific foundations for transformative AI under human control, building a robust science and metrology of intelligence, safe-by-design AI engineering, and the other foundations needed to achieve that goal.

Published on October 7, 2024 1:02 PM GMT

We have published A Narrow Path: our best attempt to draw out a comprehensive plan to deal with AI extinction risk. We propose concrete conditions that must be satisfied for addressing AI extinction risk, and offer policies that enforce these conditions.

A Narrow Path answers the following question: assuming extinction risk from AI, what response would actually solve the problem for at least 20 years and lead to a stable global situation, one where the response is coordinated rather than unilaterally imposed, with all the dangers that would come from that?

Despite the magnitude of the problem, we have found no other plan that comprehensively tries to address the issue, so we made one.  

This is a complex problem where no one has a full solution, but we need to iterate on better answers if we are to succeed at implementing solutions that directly address the problem.

Executive summary below, full plan at www.narrowpath.co, and thread on X here.

We do not know how to control AI vastly more powerful than us. Should attempts to build superintelligence succeed, this would risk our extinction as a species. But humanity can choose a different future: there is a narrow path through.

A new and ambitious future lies beyond a narrow path. A future driven by human advancement and technological progress. One where humanity fulfills the dreams and aspirations of our ancestors to end disease and extreme poverty, achieves virtually limitless energy, lives longer and healthier lives, and travels the cosmos. That future requires us to be in control of that which we create, including AI.

We are currently on an unmanaged and uncontrolled path towards the creation of AI that threatens the extinction of humanity. This document is our effort to comprehensively outline what is needed to step off that dangerous path and tread an alternate path for humanity.

To achieve these goals, we have developed proposals intended for action by policymakers, split into three Phases:

Phase 0: Safety - New institutions, legislation, and policies that countries should implement immediately that prevent development of AI that we do not have control of. With correct execution, the strength of these measures should prevent anyone from developing artificial superintelligence for the next 20 years.

Phase 1: Stability - International institutions that ensure measures to control the development of AI do not collapse under geopolitical rivalries or rogue development by state and non-state actors. With correct execution, these measures should ensure stability and lead to an international AI oversight system that does not collapse over time.

Phase 2: Flourishing - With the development of rogue superintelligence prevented and a stable international system in place, humanity can focus on the scientific foundations for transformative AI under human control. Build a robust science and metrology of intelligence, safe-by-design AI engineering, and other foundations for transformative AI under human control.



