Published on May 21, 2025 2:46 PM GMT
Yes, this topic has been discussed multiple times. But at this point, long enough ago that my choices for joining the conversation are [Necro] or [New post], so here we are. (You can jump to the second-to-last paragraph if you just want my conclusion.)
Let me start by immediately copping out of answering my own question:
I have no idea why rationalists aren't winning. Heck, I don't even know that they aren't winning, though I assume that one of their goals would be to spread the rationalist community, so I do have to downgrade the probability from my prior. But to answer the question of why tons of people I've never met or even interacted with aren't fulfilling utility curves I don't know, I would need to be a superintelligence or just plain epistemically irresponsible.
So, I'll substitute an easier question of "Why am I, an aspiring rationalist, not winning?" and then hope that either the reasons will happen to be similar or it can spark others to answer the same question and help assemble the mosaic of an answer to the big question.
Why am I not winning?
This is still a bad question. It has the form of "Why have I not stopped beating my wife?": it presumes there is something to explain, and that "me not winning" isn't the default.
Am I winning? Is that unexpected in my model?
Ok, better - now we can start. Am I winning - well, yes and no. Life has many aspects. I win at pickleball, but we probably don't care. So let's oversimplify life into financial/technical and social/relational to make it easier to talk about.
| Area | Winning? |
| --- | --- |
| Financial/Technical | Yes |
| Social/Relational | No |
Ok, so now that we've condensed my life into 2 bits of info, we can ask the next question: does my model predict that?
I don't need to tell you about myself to tell you my model results, but you might want to try your own model out, so I'll give you info that might be inputs in your model. Feel free to skip if you don't care to try your own model on me.
ASL: 37/M/USA
Race: Caucasian
Disabilities: None
Education: Masters + Professional Certification
Education type: public
Parental Education: PhD (Father) and Masters (Mother)
Parental Income Level: Upper middle class
IQ: Unknown, but scored 98-99th percentile on standardized tests in school
EQ: Unknown and is this even actually a thing? I think I read that it disappeared when g was accounted for.
Religion: Agnostic (Previously Christian)
History as rationalist: Became aware of cognitive bias in 2012. Led to doubt of human testimony as a source of knowledge and of the human ability to know truth on controversial issues, which led to losing my faith (which was the biggest or second-biggest part of my identity up to that time). Agnosticism and lack of epistemic closure drove repeated investigations into bias and whether I could see or overcome my own biases.
First heard of LW a few years later when a friend shared the double-crux method with me. Have occasionally read LW articles since then. Became convinced by an independent blog post that AI X-risk was a real concern around 2017. Became interested in how other people think and in psychological wellbeing for myself around 2020, and began periodic mindfulness meditation. Read Thinking Fast and Slow in 2023. Read Rationality: AI to Zombies this year (2025).
Personality: Generally, spend a lot of time thinking/analyzing. Meta strategy of engine-building, self-improvement, recursing back to the problem that precedes my current problem. Analytical and not decisive. Middle popularity in middle/high school. Good sense of humor and decent sense of self kept me from getting picked on, but wasn't part of the 'in' crowd. Had a friend clique, was most in the band/drama geek crowd (and also the church crowd) but freely associate with other crowds. Decent athlete until dual-enrollment got in the way Junior year. Would be described by others as: intelligent, intense, book-smart, funny, a good listener, debatey, honest, stable, principled, a bit proud, good at social analysis, bad at social perception.
Neurodivergence: Probably something like undiagnosed ADD. Use caffeine to help focus and have difficulty focusing on mundane tasks but hyperfocus on complex ones. Have time-blindness. Have some traits in common with ASD (low social perceptivity, literalness, strong desire to talk about interests) but others that don't fit (enjoy new places, enjoy meeting new people, enjoy new foods/textures/experiences, good ability to understand emotions when I am paying attention).
Since I've been working on rationality in some way or another since 2012, my model splits predictions.
My model predictions:
| Area | Winning pre-2012? | Winning now? |
| --- | --- | --- |
| Financial/Technical | Yes | Yes |
| Social/Relational | No | Yes |
And the real results:
| Area | Winning pre-2012? | Winning now? |
| --- | --- | --- |
| Financial/Technical | Yes | Yes |
| Social/Relational | No | No |
Ok, so 3/4 correct. My model is pretty simple: predict initial winning will follow personality description, but that later winning is a product of intelligence + commitment to rationality + commitment to self-improvement.
So why did my model fail to predict the lack of social/relational winning now?
Possibilities:
- I misreported the real result: Somewhat plausible. Moving the goalposts is a documented phenomenon, and I tend to focus on where there are issues instead of what is going well. Still, a raw comparison puts my close friendships/family at slightly better than before, romance at the same, and acquaintances/social circle at significantly worse, so it's hard to classify that as a win.
- I got the direction wrong: This does count as evidence against the idea that intelligence + rationality + self-improvement = success, but my prior is pretty high, so I still think that's more likely true than not.
- I got the effect size or timing wrong: Very plausible. I have overestimated effect sizes multiple times in the past in various areas.
Ok, so if I got the effect size/timing wrong - why?
Here's my current best hypothesis.
System 1 = Fast intuitive gut thinking: everything that doesn't require intentional attention.
System 2 = Slow, methodical logical thinking: everything that does require intentional attention.
Rationality and self-improvement techniques are learned with system 2. Financial/Technical tasks are handled by system 2. Better system 2 -> better performance on financial/technical tasks (at least if those are areas you learned techniques for).
Relationship and social interaction are handled by system 1. System 2 is not fast enough, so trying to do relationships on system 2 is hard mode and has a low ceiling for how much success you can get. Better system 2 != Better performance at system 1 tasks.
System 2 has two ways to help: 1) Train system 1 to be better. But brains are complex and hard to intentionally change, so this can be a very slow process. 2) Take over when things start going badly to limit the damage. This works, but ironically it can decrease average relationship quality by stopping the natural pruning process in which relationships with bad dynamics go badly enough to end. It's best for close friends/family, where the relationship will continue anyway.
This hypothesis has the advantage of correctly predicting that my close friends/family relationships would improve more than others. I did know the result before making the model, so it's not technically a good test, but I at least wasn't consciously thinking about it and didn't realize that this prediction would match until after creating the model.
So that's what I have to contribute to the discussion: My datapoint and the hypothesis that the reason for not winning is that the knowledge resides in a different part of the brain. Does that hypothesis fit your data?