Levels of Doom: Eutopia, Disempowerment, Extinction

This post examines what "disempowerment" means for humanity's future and how it relates to "extinction" and "eutopia". It notes that people disagree over how to interpret disempowerment: some treat it as a precursor to human extinction, while others see it as not a bad thing at all. The author argues that in envisioning a good future, the focus should be on humans gaining the resources and freedom to become transhuman beings, rather than merely on the development of AI. The post also analyzes Yudkowsky's arguments, discusses how attitudes toward disempowerment differ across contexts, and stresses the importance of distinguishing between different levels of "doom".

🤔 The meaning and interpretations of "disempowerment": The post notes that disempowerment is often read as implying human extinction, or alternatively as an acceptable state. This ambiguity makes it hard for people to reach agreement about the future.

💡 Eutopia versus disempowerment: The author holds that a good future should give humans the resources and freedom to grow into transhuman beings. This contrasts with futures dominated solely by AIs, in which humans play only ornamental roles or cannot improve themselves at all.

🧐 Yudkowsky's arguments and disempowerment: Yudkowsky's arguments center on the risk that AI causes human extinction. The author argues that even if AI does not cause extinction, disempowerment alone can be a serious problem that deserves attention.

🌍 The noncentral fallacy of "doom": The post discusses how people focus on different things when debating the future. Some focus on extinction risk, while others care more about disempowerment or about achieving eutopia. These differing focuses easily lead to misunderstanding and talking past one another.

Published on June 5, 2025 7:08 PM GMT

Disempowerment sits on the fence: it gets interpreted as either implying human extinction or as being a good place to end up. "Doom" tends to be ambiguous between disempowerment and extinction, as well as about when that outcome should be gauged. And many people currently feel both disempowered and OK, so they see eutopia as similar to disempowerment, with neither counting as an example of "doom".

Arguments pointing to the risk of human extinction run into the issue of people expecting disempowerment without extinction, even though some of the same arguments would remain relevant if applied directly to disempowerment (including the moral arguments about extinction or disempowerment being a problem). And arguments pointing to the desirability of establishing eutopia run into the issue of people expecting disempowerment to be approximately as good and in practice much more likely. When the distinctions between these levels of doom are not maintained, conflation makes it harder to meaningfully disagree.

Eutopia Without Disempowerment

This distinction might be slightly more murky, so it's worth defining more explicitly. For me, a crux of a future that's good for humanity is giving biological humans the resources and the freedom to become transhuman beings themselves, with no hard ceiling on their long-run relevance. This is in contrast to AIs only letting some originally-human people grow into more powerful but still purely ornamental roles, or not letting them grow at all, or not letting them think faster and do checkpointing and multiple instantiations of their mind states on a non-biological cognitive substrate, or letting them unwillingly die of old age or disease. This should only apply to those who so choose, under their own direction rather than only through externally imposed uplifting protocols, even if that leaves reaching a sensible outcome no more straightforward than achieving world-class success of some kind today.

In particular, this implies reasonable resources being left to those who remain or become regular biological humans (or take their time growing up), including through the influence of some of these originally-human beings who happen to consider ensuring that a good thing.

Yudkowsky's Arguments and Disempowerment

Yudkowsky frames AGI ruin arguments around extinction, which his models predict. I think many of the same arguments survive in a world where some AIs care about humanity at least the minimal amount needed to preserve it in some form (at least for a while). Those arguments remain highly convincing, still suggesting the near-inevitability of disempowerment or worse in the mainline scenario. And so objections about extinction that don't simultaneously work as objections about disempowerment tend to miss the point, if disempowerment on its own is seen as a similarly significant problem.

Noncentral Fallacy of Doom

The more defensible motte of a position or worldview can be noncentral in the context of its bailey: an irrelevant technicality grudgingly admitted.

Someone worried about extinction (the bailey) may frame their arguments as being about existential risk, which covers disempowerment or worse (the motte), feeling that disempowerment is still an undesirable outcome. But disempowerment won't be universally considered undesirable, or even meaningfully avoidable, in a good future. And so AI notkilleveryoneism being about extinction specifically is a good fence to maintain.

A successionist hoping for superintelligence for its own sake (the bailey) may talk about it as a practical source of abundance and stability (the motte), feeling that the more subtle issue of ending up with disempowerment rather than eutopia is not as important, and that even some risk of extinction is acceptable, but in any case not the point. Pointing out that extinction is undesirable would fall flat, both as apparently missing the point and as something they consider relatively unlikely.


