LessWrong · June 1, 02:22
The best approaches for mitigating "the intelligence curse" (or gradual disempowerment); my quick guesses at the best object-level interventions

This article examines how to address the risk of "the intelligence curse" or "gradual disempowerment" in the age of AI: humans being progressively marginalized because their labor is no longer valuable. The author proposes a set of targeted interventions and argues they are more leveraged than other commonly discussed ones, focusing on mandating interoperability, equipping individuals with aligned AI representatives, and improving societal awareness. The author also analyzes and deprioritizes several other interventions, while emphasizing the importance of addressing technical misalignment.

🔑 **Mandating interoperability:** Require, via regulation or norms, that AI companies support all the APIs and interfaces needed for third parties to customize their models and attempt different alignment approaches. Third parties could audit compliance, or companies could submit their weights to a third party that implements the API. The aim is to reduce concentration of power and promote competition.

🤖 **Aligned AI representatives:** Give each person a competitive, aligned AI representative that advises them on how to advance their interests and pursues those interests directly on their behalf. This helps people invest strategically and notice early the factors that could disempower them.

📢 **Improving societal awareness:** Emphasize raising societal awareness, including via transparency, ongoing deployment of models, and capability demonstrations. This helps advance the interventions above and lets people negotiate while they still have power.

🚫 **Deprioritized interventions:** The author is skeptical of broadly diffusing AI through the economy, building systems that assist humans, and general human augmentation. Likewise, the author argues that localized AI capability and open source, as well as "making AI+humans more competitive," fail to effectively solve the problem.
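As a concrete illustration of the mandated-interoperability idea above, here is a minimal Python sketch of what a required customization surface might look like. The interface, method names, and the `MockProvider` class are all hypothetical, invented for illustration; they do not correspond to any real standard or any provider's actual API.

```python
from abc import ABC, abstractmethod

class InteroperabilityAPI(ABC):
    """Hypothetical minimal surface a regulation might require model
    providers to expose, so third parties can customize models and
    attempt their own alignment. Names here are illustrative only."""

    @abstractmethod
    def fine_tune(self, base_model: str, training_data: list) -> str:
        """Fine-tune a base model; return an id for the new model."""

    @abstractmethod
    def set_system_prompt(self, model_id: str, prompt: str) -> None:
        """Override the model's system-level instructions."""

    @abstractmethod
    def sample(self, model_id: str, prompt: str) -> str:
        """Generate text from the (possibly customized) model."""

class MockProvider(InteroperabilityAPI):
    """Toy in-memory implementation, standing in for a provider that
    either hosts this API itself or submits weights to a third party
    that implements it."""

    def __init__(self):
        self.models = {"base": {"system": "", "data": []}}

    def fine_tune(self, base_model, training_data):
        # Derive a fresh id from the number of models already registered.
        new_id = f"{base_model}-ft{len(self.models)}"
        self.models[new_id] = {
            "system": self.models[base_model]["system"],
            "data": list(training_data),
        }
        return new_id

    def set_system_prompt(self, model_id, prompt):
        self.models[model_id]["system"] = prompt

    def sample(self, model_id, prompt):
        # A real provider would run inference; the mock echoes its state.
        m = self.models[model_id]
        return f"[{model_id} | system={m['system']!r}] response to {prompt!r}"
```

The point of the sketch is the shape of the obligation, not the implementation: if every provider had to expose something like these three operations (or hand weights to a third party that does), third parties could verify compliance against the interface.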

Published on May 31, 2025 6:20 PM GMT

There have recently been various proposals for mitigations to "the intelligence curse" or "gradual disempowerment"—concerns that most humans would end up disempowered (or even dying) because their labor is no longer valuable. I'm currently skeptical that the typically highlighted prioritization and interventions are best, and I have some alternative proposals for relatively targeted/differential interventions which I think would be more leveraged (as in, the payoff is higher relative to the difficulty of achieving them).

It's worth noting I doubt that these threats would result in huge casualty counts (due to e.g. starvation) or disempowerment of all humans (though substantial concentration of power among a smaller group of humans seems quite plausible).[1] I decided to put a bit of time into writing up my thoughts out of general cooperativeness (e.g., I would want someone in a symmetric position to do the same).

(This was a timeboxed effort of ~1.5 hr, so apologies if it is somewhat poorly articulated or otherwise bad. Correspondingly, this post is substantially lower effort than my typical post.)

My top 3 preferred interventions focused on these concerns are:

1. Mandating interoperability: requiring (via regulation or norms) that AI companies support the APIs and interfaces needed for third parties to customize their models and attempt different alignment approaches.
2. Aligned AI representatives: ensuring each person has a competitive, aligned AI representative that advises them on advancing their interests and can pursue those interests on their behalf.
3. Improving societal awareness, so that people can push for interventions like the above and negotiate while they still have power.

Some things which help with the above: transparency, ongoing deployment of models, and capability demonstrations.

Implicit in my views is that the problem would be mostly resolved if people had aligned AI representatives which helped them wield their (current) power effectively.

To be clear, something like these interventions has been highlighted in prior work, but I have a somewhat different emphasis and prioritization and I'm explicitly deprioritizing other interventions.

Deprioritized interventions and why:

- Broadly diffusing AI through the economy, building systems that assist humans, and general human augmentation: I'm skeptical these address the core dynamic.
- Localized AI capability and open source: doesn't effectively solve the problem.
- "Making AI+humans more competitive": likewise doesn't effectively solve the problem.

(I'm not discussing interventions targeting misalignment risk, biorisk, or power grab risk, as these aren't very specific to this threat model.)

Again, note that I'm not particularly recommending these interventions given my views about the most important risks; I'm just claiming these are the best interventions if you're worried about "intelligence curse" / "gradual disempowerment" risks.

  1. ^

    That said, I do think that technical misalignment issues are pretty likely to disempower all humans and I think war, terrorism, or accidental release of homicidal bioweapons could kill many. That's why I focus on misalignment risks.


