Why I am not a successionist

This post argues against the view that an AI which successfully maximizes utility should replace humanity. The author does not claim that a human-centric definition of value is necessary; rather, the opposition stems from a preference for one's own kind, rooted in biological instinct. The author considers extremely gradual human improvement acceptable, akin to our evolution from apes, as opposed to replacement by a more intelligent AI. The author is troubled that many in the AI safety community endorse replacing humanity with a sufficiently advanced AI. The author also grants that if human extinction were near-inevitable, building a worthy AI successor would be worthwhile, but holds that this does not justify actively choosing an AI successor on the grounds that it is "better". The post further explores thought experiments such as uploading humans to a computer to simulate evolution, and gene editing, but concludes that in reality the risks are too high and it is better to trust biological evolution.

🧬The author opposes AI replacing humanity not on the basis of any definition of human value, but out of a biologically instinctive preference for one's own kind, much like preferring one's own family and not wishing to see them replaced by "superior" beings.

💡The author considers extremely gradual human improvement acceptable, akin to humanity's evolution from apes: a natural progression rather than being directly "replaced" by an outside force.

🤔The author is concerned that many in the AI safety community endorse replacing humanity with a sufficiently advanced and aligned AI; the author would rather see our actual descendants carry on.

💻The author explores the thought experiment of uploading humans to a computer and simulating their evolution, but stresses that the simulation must be faithful and that every intermediate being must be fully simulated and valued.

🧬The author also raises the possibility of gene editing, but would only accept it on the condition of being able to seriously reflect on and endorse each step; otherwise the risk is too high.

Published on May 4, 2025 7:08 PM GMT

Utilitarianism implies that if we build an AI that successfully maximizes utility/value, we should be ok with it replacing us. Sensible people add caveats related to how hard it’ll be to determine the correct definition of value or check whether the AI is truly optimizing it.

As someone who often passionately rants against the AI successionist line of thinking, I most commonly hear the objection: "why is your definition of value so arbitrary as to stipulate that biological meat-humans are necessary?" This misses the crux: I agree such a definition of moral value would be hard to justify.

Instead, my opposition to AI successionism comes from a preference toward my own kind, hardwired into me by biology. I prefer my family members to randomly-sampled people with similar traits. I would certainly not elect to sterilize or kill my family members so that they could be replaced with smarter, kinder, happier people. The problem with successionist philosophies is that they deny this preference altogether. It's not as if they are saying "the end of humanity is completely inevitable; at least these other AI beings will continue existing," which I would understand. Instead, they are saying we should be happy with, and actively choose, the path of human extinction and replacement by "superior" beings.

That said, there is an extremely gradual version of human improvement that I think is acceptable, in which each generation endorses and gives rise to the next, and no one is being "replaced" at any particular instant. This is akin to our evolution from chimps, and it is a different kind of process than if the chimps had been raising llamas for meat, and the llamas had eventually become really smart and morally good, peacefully sterilized the chimps, and taken over the planet.

Luckily, I think AI X-risk is low in absolute terms. If it were not, I would be very concerned that a large fraction of the AI safety and alignment community endorses humanity being replaced by a sufficiently aligned and advanced AI, and would prefer this to a future where our actual descendants spread across the planets, albeit at a slower pace and with fewer total objective "utils". I agree that if human extinction is near-inevitable, it's worth trying to build a worthy AI successor. But my impression is that many think the AI successor can be actually "better", such that we should choose it, and that is the view I am disavowing here.

Some people have noted that if I endorse chimps evolving into humans, I should endorse an accurate but much faster simulation of that process. That is, if my family and I were uploaded to a computer and our existence and evolution were simulated at enormous speed, I should be ok with our descendants coming out of the simulation and repopulating the world. Of course this is very far from what most AI alignment researchers are thinking of building. But if I thought there were definitely no bugs in the simulation, and that the uploads were veritable representations of us living equally real lives (faster in absolute terms, but equivalent in clock-cycles/FLOPs), perhaps this would be fine. Importantly, I value every intermediate organism in this chain; i.e., I value my children independently of their capacity to produce grandchildren. So for this to work, their existence would have to be simulated fully.

Another interesting thought experiment is whether I would support gene-by-gene editing myself into my great-great-great-…-grandchild. Here, I am genuinely uncertain, but I think maybe yes, under the condition of being able to seriously reflect on and endorse each step. In reality, I don't think simulating such a process is at all realistic, or related to how actual AI systems are going to be built, but it's an interesting thought experiment.

We already have a reliable improvement-and-reflection process, provided by biology and evolution, and so unless that process is necessarily doomed, I believe the risk of messing up is too high to justify seeking a better, faster version.


