Populectomy.ai

The article examines the potential risk that AI development leads to a massive reduction in the human population, and proposes the term "Populectomy" to name that outcome. It builds on the core argument of the "Gradual Disempowerment" paper: once humans are no longer necessary to society's systems, humanity's position is threatened. The author goes on to argue that, as AI advances, the patterns of human cooperation are fragile in ways that could produce mass population reduction. The article calls for resisting runaway AI development and for steering society toward a better future through cultural and normative resistance.

💡 The article's central claim is that as AI develops, humans may cease to be a necessary component of social systems, which could lead to large-scale population reduction.

🤔 The author proposes the term "Populectomy" to describe a possible outcome of AI development: a massive reduction in population, rooted in the collapse of human patterns of cooperation.

⚠️ The article argues that civilizational cooperation rests on mutual dependence among humans; once AI can carry out humans' tasks on its own, that dependence disappears, leading to human disempowerment.

🌍 The article urges vigilance about the risks of AI development and advocates cultural and normative resistance to steer society toward a better future, stressing that resisting AI development matters "as if our lives depend on it."

Published on March 24, 2025 10:06 PM GMT

A long-form iteration of my "AI will lead to massive violent population reduction" argument: https://populectomy.ai

The host name, populectomy, is my attempt at naming the described outcome, a name I hope is workable (sufficiently evocative and pithy, without being glib). Otherwise I'm out 150 USD for the domain registration; .ai domains come at a premium.

I've mimicked the paper-as-website model, with its <bad-outcome>.ai domain name, used by @Jan_Kulveit, @Raymond D, @Nora_Ammann, @Deger Turan, David Krueger, and @David Duvenaud. Mimicry being the highest form of flattery and what-not. Nice serif font styling like theirs is on my wish list.

Here's my previous post on the topic.

A few words may shortcut reading the whole thing, especially if you've read the previous post:

    In "Shell Games and Flinches" @Jan_Kulveit provides a "shortest useful summary" to the Gradual Disemplowerment paper's core argument: 

    "To the extent human civilization is human-aligned, most of the reason for the alignment is that humans are extremely useful to various social systems like the economy, and states, or as substrate of cultural evolution. When human cognition ceases to be useful, we should expect these systems to become less aligned, leading to human disempowerment."

    I basically agree with that statement. However, I think it is effectively trumped by this one, the shortest useful summary of Populectomy: Human civilization is a system of large-scale human cooperation made possible by the fact that killing many humans requires many other willing human collaborators who don't want to be killed themselves, making cooperation better than elimination. When human allies cease to be necessary for the elimination of human rivals, we should expect (mass) human civilization to cease.

    The conceit of humanity as a shared project is very useful for maintaining human cooperation. However, I think it encourages a blind spot when big questions about the effects of new technology are framed as "what will this do to humanity?" Killing being a form of disempowerment, I agree that most humans will be disempowered. And yet, since it is the kind of disempowerment that takes them off the board, the end result is not a humanity that is disempowered.

    I feel the shared-humanity conceit, as emphasized by EA-style universalism, creates some awkwardness around how we define "bad outcomes." I.e., what if we were forced to choose between AI killing all humans and a few humans killing all the others (and then living happily ever after)? Fortunately, I don't think arguments about the relative expected value of either outcome are necessary or helpful.

    Where "happily ever after" actually does matter is the essay's claim that the better human future has a very low population, with a carefully selected set of life-improving technology. This future could possibly be achieved in a managed, non-violent way (I propose an "if a life-ending asteroid were on its way and a New Earth were reachable by a small number of refugees, how would we organize ourselves to ensure the continuation of the species?" thought experiment). Between the risks of catastrophic misalignment, gradual disempowerment, and populectomy, there's an overwhelming case to resist AI development (as if our lives depend on it), and to steer toward a better path.

    The democracy-dissolving character of AI identified in Gradual Disempowerment helps clarify that one ought not to put much hope or faith in democratic processes and policies. The better option may be low-coordination normative and cultural resistance.

