Why is LW not about winning?

 

The article examines two different strategies rationality offers for solving problems: cognitive rationality (systematically improving one's thinking) and winning rationality (focusing on real-world results). The author argues that the LessWrong community leans heavily toward cognitive rationality, but that for complex goals such as AI alignment, winning rationality may be more effective. By contrasting the two strategies, the article stresses that pursuing a goal should draw on many means, such as acquiring resources and building teams, rather than relying solely on direct research. The author calls on the community to pay more attention to "winning" strategies in order to solve real problems more efficiently.

🤔 The author observes that when tackling problems, the LessWrong community emphasizes cognitive rationality, focusing on cognitive biases, algorithms, and epistemology, while paying comparatively little attention to "winning."

💡 The author argues that for goals like solving AI alignment, relying on cognitive rationality alone (e.g., doing the research yourself) may be inefficient; "winning" strategies, such as acquiring resources and assembling teams, should be considered to increase leverage.

💪 The author stresses that pursuing a goal should combine cognitive rationality with winning rationality. Winning rationality leans more on agency and resource deployment, for example hiring more researchers to gain greater impact.

Published on July 13, 2025 10:36 PM GMT

This is a bit of a rant but I notice that I am confused.

Eliezer said in the original Sequences:

Rationality is Systematized Winning

But it's pretty obvious that LessWrong is not about winning (and Eliezer provides a more accurate definition of what he means by rationality here). As far as I can tell, LW is mostly about cognitive biases and algorithms/epistemology (the topic of Eliezer's Sequences), self-help, and a lot of AI alignment.

But LW should be about winning! LW has the important goal of solving alignment, so it should care a lot about the most efficient way to go about it, in other words, about how to win, right?

So what would it look like if LW had a winning attitude towards alignment?

Well, I think this is where the distinction between the two styles of rationality (cognitive algorithm development vs. winning) matters a lot. If you want to solve alignment and want to be efficient about it, it seems obvious that there are better strategies than researching the problem yourself: instead of spending 3+ years on a PhD (cognitive rationality), get 10 other people to work on the issue (winning rationality). That alone 10x's your efficiency.

My point is that we should consider all strategies when solving a problem. Not only the ones that focus directly on the problem (cognitive rationality/researching alignment), but also the ones that involve acquiring a lot of resources and spending them on the problem (winning rationality/getting 10 other people to research alignment).

This is especially true when other strategies get you orders of magnitude more leverage on the problem. To pick an extreme example, who do you think has more capacity to solve alignment: Paul Christiano or Elon Musk? (Hint: Elon Musk can hire a lot of AI alignment researchers.)

I am confused because LW teaches cognitive rationality, so it should notice all of this, recognize that epistemology, cognitive biases, and a direct approach are not the most efficient way to go about alignment (or any ambitious goal), and start studying how people actually win in the real world.

But it's not happening (well, not much at least).

As far as I can tell, cognitive rationality helps, but winning really seems to be mostly about agency and power. So maybe LW should talk more about those (and how to use them for good)?



Discuss
