LessWrong · May 21, 22:52
Why Aren't Rationalists Winning (Again)
Starting from his own experience, the author explores why rationalists have not fully succeeded. He simplifies life into two aspects, financial/technical and social/relational, and finds he is winning in the financial/technical area but falling short socially/relationally. He analyzes the gap between his model's predictions and the actual results and proposes a hypothesis: rationality and self-improvement techniques are learned mainly through System 2 (slow, deliberate logical thinking), while relationships and social interaction are handled mainly by System 1 (fast, intuitive thinking). Improving System 2 therefore does not necessarily improve System 1 performance, which is why rationalists struggle to make significant progress in social relationships.

🧠 The author simplifies life into financial/technical and social/relational aspects, finding success in the former but not the latter. This prompts him to investigate the cause rather than treat "not fully winning" as a given.

🤔 The author's core hypothesis: rational thinking and self-improvement techniques are learned mainly via System 2 (slow, deliberate logical thinking), while relationships and social interaction are handled by System 1 (fast, intuitive thinking). Being good at rational analysis (System 2) therefore does not necessarily improve social ability (System 1).

🤝 System 2 can help relationships in two ways: by training System 1, which is a slow process, or by stepping in to limit damage when things go badly, which can block the natural pruning of bad relationships and thus lower average relationship quality. The latter works best for close friends and family, since those relationships will continue regardless.

📊 The author tests his model's predictions by comparing his "winning" in the financial/technical and social/relational areas before and after 2012. The model is accurate on financial/technical success but misses on the social/relational side.

Published on May 21, 2025 2:46 PM GMT

Yes, this topic has been discussed multiple times.  But the last round was long enough ago that my choices for joining the conversation are [Necro] or [New post], so here we are.  (You can jump to the second-to-last paragraph if you just want my conclusion.)


Let me start by immediately copping out of answering my own question: 

I have no idea why rationalists aren't winning.  Heck, I don't even know that they aren't winning, though I assume that one of their goals would be to spread the rationalist community, so I do have to downgrade the probability from my prior.  But to answer the question of why tons of people I've never met or even interacted with aren't fulfilling utility curves I don't know, I would need to be a superintelligence or just plain epistemically irresponsible.
 

So, I'll substitute an easier question of "Why am I, an aspiring rationalist, not winning?" and then hope that either the reasons will happen to be similar or it can spark others to answer the same question and help assemble the mosaic of an answer to the big question.

Why am I not winning?
This is still a bad question.  It has the form of "Why have I not stopped beating my wife?" and presumes both that there is something to explain and that "me not winning" isn't the default.

Am I winning?  Is that unexpected in my model?
Ok, better - now we can start.  Am I winning - well, yes and no.  Life has many aspects.  I win at pickleball, but we probably don't care.  So let's oversimplify life into financial/technical and social/relational to make it easier to talk about.
Area                    Winning?
Financial/Technical     Yes
Social/Relational       No

Ok, so now that we've condensed my life into 2 bits of info, we can ask the next question: does my model predict that?

I don't need to tell you about myself to tell you my model results, but you might want to try your own model out, so I'll give you info that might be inputs in your model. Feel free to skip if you don't care to try your own model on me.

ASL: 37/M/USA
Race: Caucasian
Disabilities: None
Education: Masters + Professional Certification
Education type: public
Parental Education: PhD (Father) and Masters (Mother)
Parental Income Level: Upper middle class
IQ: Unknown, but scored 98-99th percentile on standardized tests in school
EQ: Unknown and is this even actually a thing?  I think I read that it disappeared when g was accounted for.
Religion: Agnostic (Previously Christian)
History as rationalist: Became aware of cognitive bias in 2012.  Led to doubt of human testimony as a source of knowledge and of human ability to know truth on controversial issues, which led to losing my faith (which was the biggest or 2nd biggest part of my identity up to that time).  Agnosticism and lack of epistemic closure drove repeated investigations into bias and whether I could see or overcome my own biases.
First heard of LW a few years later when a friend shared the double-crux method with me.  Have occasionally read LW articles since then.  Became convinced by an independent blog post that AI X-risk was a real concern around 2017.  Became interested in how other people think and in psychological wellbeing for myself around 2020, and began periodic mindfulness meditation.  Read Thinking, Fast and Slow in 2023.  Read Rationality: AI to Zombies this year (2025).
Personality: Generally, spend a lot of time thinking/analyzing.  Meta strategy of engine-building, self-improvement, recursing back to the problem that precedes my current problem.  Analytical and not decisive.  Middle popularity in middle/high school.  Good sense of humor and decent sense of self kept me from getting picked on, but wasn't part of the 'in' crowd.  Had a friend clique, was most in the band/drama geek crowd (and also the church crowd), but freely associated with other crowds.  Decent athlete until dual-enrollment got in the way Junior year.  Would be described by others as: intelligent, intense, book-smart, funny, a good listener, debatey, honest, stable, principled, a bit proud, good at social analysis, bad at social perception.
Neurodivergence: Probably something like undiagnosed ADD.  Use caffeine to help focus and have difficulty focusing on mundane tasks but hyperfocus on complex ones.  Have time-blindness.  Have some traits in common with ASD (low social perceptivity, literalness, strong desire to talk about interests) but others that don't fit (enjoy new places, enjoy meeting new people, enjoy new foods/textures/experiences, good ability to understand emotions when I am paying attention).  

Since I've been working on rationality in some way or another since 2012, my model splits predictions.
My model predictions:
Area                    Winning pre-2012?    Winning now?
Financial/Technical     Yes                  Yes
Social/Relational       No                   Yes

And the real results:
Area                    Winning pre-2012?    Winning now?
Financial/Technical     Yes                  Yes
Social/Relational       No                   No

Ok, so 3/4 correct.  My model is pretty simple: predict initial winning will follow personality description, but that later winning is a product of intelligence + commitment to rationality + commitment to self-improvement.
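The two-part model just described can be restated as a toy sketch (my own framing with hypothetical names, not the author's actual procedure):

```python
# Toy restatement of the prediction model: pre-2012 winning tracks
# personality fit; later winning is predicted whenever intelligence,
# commitment to rationality, and commitment to self-improvement all hold.

def predicted_winning(baseline_fit: bool, post_2012: bool,
                      intelligence: bool, rationality: bool,
                      self_improvement: bool) -> bool:
    if not post_2012:
        return baseline_fit
    return intelligence and rationality and self_improvement

# The author's two areas, per the personality description:
pred = {
    ("financial", "pre"): predicted_winning(True,  False, True, True, True),
    ("financial", "now"): predicted_winning(True,  True,  True, True, True),
    ("social",    "pre"): predicted_winning(False, False, True, True, True),
    ("social",    "now"): predicted_winning(False, True,  True, True, True),
}
# Matches the real results on 3 of 4 cells; the model wrongly
# predicts ("social", "now") as a win.
```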
So why did my model fail to predict the lack of social/relational winning now?
Possibilities:

    1. I misreported the real result: Somewhat plausible.  Moving the goalposts is a documented phenomenon, and I tend to focus on where there are issues instead of what is going well.  Still, a raw comparison puts my close friendships/family at slightly better than before, romance at the same, and acquaintances/social circle at significantly worse, so it's hard to classify that as a win.
    2. I got the direction wrong: This does count as evidence against the idea that intelligence + rationality + self-improvement = success, but my prior is pretty high, so I still think that's more likely true than not.
    3. I got the effect size or timing wrong: Very plausible.  I have overestimated effect sizes multiple times in the past in various areas.

Ok, so if I got the effect size/timing wrong - why?
Here's my current best hypothesis.  
System 1 = Fast intuitive gut thinking: everything that doesn't require intentional attention.
System 2 = Slow, methodical logical thinking: everything that does require intentional attention.

Rationality and self-improvement techniques are learned with system 2.  Financial/Technical tasks are handled by system 2.  Better system 2 -> better performance on financial/technical tasks (at least if those are areas you learned techniques for).
Relationship and social interaction are handled by system 1.  System 2 is not fast enough, so trying to do relationships on system 2 is hard mode and has a low ceiling for how much success you can get.  Better system 2 != Better performance at system 1 tasks.
System 2 has two ways to help: 1) Train system 1 to be better.  But brains are complex and hard to intentionally change, so this can be a very slow process.  2) Take over when things start going badly to limit damage - works, but ironically, can decrease average relationship quality by stopping the natural pruning process where relationships with bad dynamics go badly enough to end.  Best for close friends/family where relationship will continue anyway.
This hypothesis has the advantage of correctly predicting that my close friends/family relationships would improve more than others.  I did know the result before making the model, so it's not technically a good test, but I at least wasn't consciously thinking about it and didn't realize that this prediction would match until after creating the model.  

So that's what I have to contribute to the discussion: My datapoint and the hypothesis that the reason for not winning is that the knowledge resides in a different part of the brain.  Does that hypothesis fit your data?



