For quite a while, I've been confused about why (sweet nonexistent God, whyyyyy) so many[1] people intuitively believe that any risk of genocide against some ethnicity is unacceptable, while being… at best lukewarm about the idea of humanity going extinct.
I went through several hypotheses, but none of them felt quite right. Yes, people don't take extinction risks as seriously as risks of genocide - but even accounting for that, there still seems to be an unexplained gap here. Even when you explain that everyone dying includes them, their family, their friends… they express a preference against it, but no conviction in that direction.[2] Or at least that's how I interpret the blasé attitude towards human extinction combined with the visible anger/disgust at the idea of genocide.
Recently, I realized that there is a decent explanation for why so many people believe this: if we model them as operating under a strictly zero-sum model of the world, ‘everyone loses’ is basically an incoherent statement. At best it approximates either no change, which would be morally neutral, or an equal outcome, which some would actually prefer.
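To make that intuition explicit (a minimal sketch using the textbook definition of a zero-sum game; the notation is mine, not part of the original argument): with players $1, \dots, n$ receiving payoffs $u_1, \dots, u_n$, the defining constraint is

$$\sum_{i=1}^{n} u_i = 0,$$

so an outcome in which every $u_i$ is negative simply cannot occur - any loss for one player must be offset by a gain for another. Within that frame, the nearest coherent readings of “everyone loses” are the two above: all payoffs are zero (nothing changed), or the payoffs are equalized across players (everyone ends up level, which some would count as a win).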
Of course, I'm not proposing that these people explicitly believe the world is best modeled as a zero-sum game. No, we shouldn't expect a random Joe to even know what a zero-sum game is. Rather, we should notice that many things in human experience throughout history are zero-sum, so it should be no surprise that some people have those expectations baked into their intuitions.
Once I thought of this, I started seeing this maladaptation everywhere: from naive redistribution schemes[3] (“No, you don't understand, we have to stop making the rich richer! I don't care if there will be less stuff to go around, as long as the rich get poorer it'll be fine.”) and excuses for corruption (“We finally have power, of course we have to enrich ourselves and our supporters - that's the whole point of gaining power!”) to the admittedly weaker connection to mistake vs. conflict theory (if you believe the whole world works as a zero-sum game, you would reasonably expect nearly every failure to be enemy action).
Assuming this is the root cause, it does raise an interesting question, though: would those people internalize the danger of omnicide if the cause of said omnicide were anthropomorphized? I guess this is not that easy for x-risks along the lines of an asteroid impact (“The big angry asteroid is gonna willfully murder us all!”), but for AI it should be quite a bit simpler.
To be clear, I'm not advocating for doing that - but frankly, people are gonna anthropomorphize AI on their own anyway. This leads me to the prediction that, if all of this is true, the median objection will at some point shift from “Humanity going extinct? Meh, I have more important political issues to discuss!” to “That would be bad, but AI is my best friend and would never do that! Unless someone sabotages it somehow!”.
- [1]
I have no statistical data on the prevalence of this perspective. It certainly feels like a significant chunk of the populace holds it, but that's anecdotal. So yes, this whole thing is based on vibes, not data - sorry.
- [2]
Where by “preference” I mean one arising purely from System 2 thinking about morality, while “conviction” arises from System 1 gut feeling.
- [3]
I’m not against redistribution in general - quite the opposite! I do believe that redistribution that sharply decreases the overall amount of wealth is bad, though.