Severance and the Ethics of the Conscious Agents

This article explores ethical questions around artificial intelligence (AI), in particular the goal of AI alignment and the likelihood that our current moral code contains things future generations will find unethical. It discusses Nick Bostrom's views, including coherent, extrapolated ethics and the Simulation Hypothesis. The author offers a counterargument: our morals may evolve against the creation of large amounts of consciousness. The article also examines the problem of defining "consciousness," and the prospect that, once it is solved, we will be able to design agents with or without consciousness. Finally, drawing on the plot of the TV series Severance, it asks whether housing separate consciousnesses in the same body is ethical, and whether creating conscious agents to work for us will come to be seen as immoral in the future.

🤔 Nick Bostrom sees coherent, extrapolated ethics as the goal of AI alignment, to avoid calcifying our current moral code, which likely contains things future generations would find unethical, just as we find unethical much of what past generations accepted.

💡 Bostrom's Simulation Hypothesis posits that the future is likely awash with consciousnesses that want to create historical simulations, which means we are probably living in a historical simulation.

🧐 The author argues that our morals may instead evolve against the creation of large amounts of consciousness. For example, the Effective Altruism (EA) movement advocates avoiding causing pain to other creatures, even when their consciousness is questionable.

🔨 The article considers the problem of forced labor: is it acceptable to create a consciousness to do work for us? The author expects we will eventually solve the question of what consciousness "is" and be able to confirm whether an entity is conscious.

🎬 The TV series Severance foreshadows the developing morality around consciousness. In the show, failing to continue to provide experiences for an artificially created consciousness, even when the body and original consciousness survive, is frequently considered murder; retiring the consciousnesses of multiple "innies" (work-only consciousnesses) is tantamount to mass murder.

Published on April 21, 2025 2:21 AM GMT

Severance Spoilers!

Nick Bostrom talks about coherent, extrapolated ethics as the goal of AI alignment, specifically to avoid calcifying our current moral code, which likely contains many things future generations would find unethical, just as we find unethical many things previous generations accepted. Since reading that, I've been wondering which of the things we accept today might alter the trajectory of the future.

Another of Bostrom's conjectures is the Simulation Hypothesis, which posits that the future is likely awash with consciousnesses that want to create historical simulations. Those simulations would contain orders of magnitude more consciousnesses than base reality, which means we are probably consciousnesses inside a historical simulation ourselves.

My personal counterargument to this is that our morals are likely to evolve against the creation of large amounts of consciousness. The EA movement already includes avoiding causing pain to other creatures, even when their consciousness is questionable ("Save the shrimp!"). Another facet of this is forced labor: is it acceptable to create a consciousness to do work for you?

Eventually, I expect we will solve what consciousness "is" and be able to confirm whether an entity is conscious or not. After that point, it's likely we'll be able to design agents with or without consciousness. Having unconscious entities do our work may also be less computationally expensive, which could be another motivator against conscious agents.

The Severance TV series, in which separate consciousnesses are housed within the same body, foreshadows the developing morality around consciousness. In the TV show, failing to continue to provide experiences for an artificially created consciousness, even when the body and original consciousness survive, is frequently labelled as murder. Retiring the consciousness of multiple "innies" (work-only consciousnesses) is tantamount to mass murder.

What do you think? Is it ethical now to make a conscious agent to do work for you, and then retire it afterwards? Will it be considered so in the future?





Related tags

AI Ethics, Consciousness, Simulation Hypothesis, Morality, Severance