Why do many people who care about AI Safety not clearly endorse PauseAI?

Published on March 30, 2025 6:06 PM GMT

tl;dr:

From my current understanding, one of the following two things should be happening, and I would like to understand why neither is:

Either

    (1) Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.

    Or

    (2) If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.

 

Pausing AI

There does not seem to be a legible path to prevent possible existential risks from AI without slowing down its current progress.

 

I am aware that many people interested in AI Safety do not want to prevent AGI from being built EVER, mostly based on transhumanist or longtermist reasoning.

Many people in AI Safety seem to be on board with the goal of “pausing AI”, including, for example, Eliezer Yudkowsky and the Future of Life Institute. Neither of them is saying “support PauseAI!”. Why is that?

One possibility I could imagine: Could it be advantageous to hide “maybe we should slow down on AI” in the depths of your writing instead of shouting “Pause AI! Refer to [organization] to learn more!”?

 

Another possibility is that the majority opinion is actually something like “AI progress shouldn’t be slowed down” or “we can do better than lobbying for a pause”, or something else I am missing. This would explain why people neither support PauseAI nor see this as a problem to be addressed.

Even if you believe there is a better, more complicated way out of AI existential risk, pausing AI is still a useful baseline: whatever your plan is, it should be better than pausing AI, and it should not have bigger downsides than pausing AI does. There should be legible arguments and a broad consensus that your plan is better than pausing AI. Developing the ability to pause AI is also an important fallback option in case other approaches fail. PauseAI calls this “Building the Pause Button”:

    “Some argue that it’s too early to press the Pause Button (we don’t), but most experts seem to agree that it may be good to pause if developments go too fast. But as of now we do not have a Pause Button. So we should start thinking about how this would work, and how we can implement it.”

 

Some info about myself: I’m a computer science student familiar with the main arguments of AI Safety. I have read a lot of Eliezer Yudkowsky, done the AISF course readings and exercises, and watched Robert Miles’s videos.

 

My conclusion is that either

    (1) Everyone in AI Safety who thinks slowing down AI is currently broadly a good idea should publicly support PauseAI.

    Or

    (2) If pausing AI is much more popular than the organization PauseAI, that is a problem that should be addressed in some way.

 

Why is (1) not happening and (2) not being worked on?

How much of a consensus is there on pausing AI?



