Disempowerment sits on the fence: it gets interpreted as either implying human extinction or as being a good place to end up. "Doom" tends to be ambiguous between disempowerment and extinction, as well as about when that outcome must be gauged. And many people currently feel both disempowered and OK, so they see eutopia as similar to disempowerment, with neither counting as an example of "doom".
Arguments pointing to the risk of human extinction run into the issue of people expecting disempowerment without extinction, even though some of the same arguments would remain relevant if applied directly to disempowerment (including the moral arguments about extinction or disempowerment being a problem). And arguments pointing to the desirability of establishing eutopia run into the issue of people expecting disempowerment to be approximately as good and in practice much more likely. When the distinctions between these levels of doom are not maintained, conflation makes it harder to meaningfully disagree.
Eutopia Without Disempowerment
This distinction might be slightly more murky, and worth defining more explicitly. For me, a crux of a future that's good for humanity is giving the biological humans the resources and the freedom to become transhuman beings themselves, with no hard ceiling on relevance in the long run. Rather than AIs only letting some originally-human people grow into more powerful but still purely ornamental roles, or not letting them grow at all, or not letting them think faster and do checkpointing and multiple instantiations of their mind states on a non-biological cognitive substrate, or letting them unwillingly die of old age or disease. This should only apply to those who so choose, under their own direction rather than only through externally imposed uplifting protocols, even if that leaves reaching a sensible outcome no more straightforward than achieving world-class success of some kind today.
This in particular implies reasonable resources being left to those who remain/become regular biological humans (or take their time growing up), including through the influence of some of these originally-human beings who happen to consider that a good thing to ensure.
Yudkowsky's Arguments and Disempowerment
Yudkowsky frames AGI ruin arguments around extinction, which his models predict. I think many of the same arguments survive in a world where some AIs have a minimal level of caring about humanity sufficient to preserve it in some form (at least for a while). Those arguments remain highly convincing, still suggesting a near-inevitability of disempowerment or worse in the mainline scenario. And so objections about extinction that don't simultaneously work as objections about disempowerment tend to miss the point, if disempowerment on its own is seen as a similarly significant problem.
Noncentral Fallacy of Doom
A more defensible motte of a position or worldview can be noncentral in the context of its bailey, an irrelevant technicality grudgingly admitted.
Someone worried about extinction (the bailey) may frame their arguments as being about existential risk, which is disempowerment or worse (the motte), feeling that disempowerment is still an undesirable outcome. But disempowerment won't be universally considered undesirable or meaningfully avoidable even in a good future. And so AI notkilleveryoneism being about extinction is a good fence to maintain.
A successionist hoping for superintelligence for its own sake (the bailey) may talk about it being a practical source of abundance and stability (the motte), feeling that more subtle issues of ending up with disempowerment rather than eutopia are not as important, and that even some risk of extinction is acceptable, but in any case not the point. Pointing out that extinction is undesirable would fall flat both as apparently missing the point and as something considered relatively unlikely.