A theist, minimally, believes in a higher power, and believes that acting in accordance with that higher power's will is normative. The higher power must be very capable; if not infinitely capable, it must be more capable than the combined forces of all current Earthly state powers.
Suppose that a higher power exists. When and where does it exist? To be more precise, I'll use "HPE" to stand for "Higher Power & Effects", to include the higher power itself, its interventionist effects, its avatars/communications, and so on. Consider four alternatives:
1. HPEs exist in our past light-cone and our future light-cone.
2. HPEs exist in our past light-cone, but not our future light-cone.
3. HPEs don't exist in our past light-cone, but do in our future light-cone.
4. HPEs exist neither in our past light-cone nor our future light-cone; rather, HPEs exist eternally, outside time.
Possibility 1 would be a relatively normal notion of an interventionist higher power. This higher power would presumably have observable effects, miracles.
Possibility 2 is a strange "God is dead" hypothesis; it raises questions about why the higher power and its interventions did not persist, if it was so powerful. I'll ignore it for now.
Possibility 3 is broadly Singularitarian; it would include theories of AGI and/or biological superintelligence development in the future.
Possibility 4 is a popular theistic metaphysical take, but raises questions of the significance of an eternal higher power. If the higher power exists in a Tegmark IV multiverse way, it's unclear how it could have effects on our own universe. As a comparison, consider someone who believes there is a parallel universe in which there are perpetual motion machines, but does not believe perpetual motion machines are possible in our universe; do they really believe in the existence of perpetual motion machines? Possibility 4 seems vaguely deist, in that the higher power is non-interventionist.
So a simple statement of why I am not a theist is that I am instead a Singularitarian. Only possibilities 1 and 4 seem intuitively theistic, and 4 seems more deist than centrally theist.
I dis-believe possibility 1, because of empirical and theoretical issues. Empirically, science seems to explain the universe pretty well, and provides a default reason to not believe in miracles. Meanwhile, the empirical evidence in favor of miracles is underwhelming; different religions disagree about what the miracles even are. If possibility 1 is true, the higher power seems to be actively trying to hide itself. Why believe in a higher power who wants us not to believe in it?
Theoretically, I'm thinking of the universe as at least somewhat analogous to a clockwork mechanism or giant computer. Suppose some Turing machine runs for a long time. At any point in the run, the past is finite; the history of the Turing machine could only have computed so much. The future could in theory be infinite, but that would be more Singularitarian than theistic.
If extremely hard (impossible according to mainstream physics) computations had been performed in the past, then we would probably be able to observe and confirm them (given P/NP asymmetry), showing that mainstream physics is false. I doubt that this is the case; the burden of proof is on those who believe giant computations happened to demonstrate them.
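As a toy illustration of the verify/compute asymmetry I have in mind (the factoring example, the helper function, and the specific numbers are just illustrative choices, not anything tied to physics): checking a claimed answer to a hard computation is cheap, even when producing the answer is believed to be hard.

```python
# Toy illustration of verification asymmetry: checking a claimed answer to a
# hard problem is cheap, even when producing the answer is (believed to be) hard.
# Factoring a large semiprime is the classic example.

p = 1000000007    # well-known large primes; recovering them from n alone is the hard part
q = 998244353
n = p * q

def verify_factorization(n, claimed_factors):
    """Cheap check: do the claimed nontrivial factors multiply back to n?"""
    product = 1
    for f in claimed_factors:
        product *= f
    return product == n and all(1 < f < n for f in claimed_factors)

print(verify_factorization(n, [p, q]))  # True, essentially instant
print(verify_factorization(n, [3, 7]))  # False: wrong claims are also cheap to reject
```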
I realize that starting from the assumption of the universe as a mechanism or computer is not going to be convincing to those who have deep metaphysical disagreements, but I find that this sort of modeling has a theoretical precision and utility to it that I'm not sure how to get otherwise.
Now let's examine possibility 3 in more detail, because I think it is likely. People can debate whether possibility 3 is theistic or not, but Singularitarianism is a more precise name regardless of how the definitional dispute resolves.
If Singularitarianism is true, then there could (probably would?) be a future time at which possibility 1 would be true from the perspective of some agent at that future time. This raises interesting questions about the relationship between theism and Singularitarianism.
One complication is that the existence of a higher power in the future might not be guaranteed. Perhaps human civilization and Earth-originating intelligence decline without ever creating a superintelligence. Even then, presently-distant alien superintelligences may someday intersect our future light-cone. To hedge, I'll say Singularitarians need only consider the existence of a higher power in the future to be likely, not guaranteed.
To examine this possibility from Singularitarian empirical and theoretical premises, I will present a vaguely plausible scenario for the development of superintelligence:
Some human researchers realize that a somewhat-smarter-than-human being could be created, by growing an elephant with human brain cells in place of elephant brain cells. They initiate the Ganesha Project, which succeeds at this task. The elephant/human hybrid is known as Hebbo, which stands for Human/Elephant Brain/Body Organism. Hebbo is not truly a higher power, although it is significantly smarter than the smartest humans. Hebbo proceeds to lay out designs for the creation of an even smarter being. This being would have a very large brain, taking up the space of multiple ordinary-sized rooms. The brain would be hooked up to various computers, and robotic and biological actuators and sensors.
The humans, because of their cultural/ethical beliefs, consider this a good plan, and implement it. With Hebbo's direction, they construct Gabbo, which stands for Giant Awesome Big-Brained Organism. Gabbo is truly a higher power than humans; Gabbo is very persuasive, knows a lot about the world, and organizes state-like functions very effectively, gaining more military capacity than all other Earthly state powers combined.
Humans have a choice to assist or resist Gabbo. But since Gabbo is a higher power, resistance is ultimately futile. The only effect of resistance is to slow down Gabbo's execution of Her plans (I say Her because Gabbo is presumably capable of asexual reproduction, unlike any male organism). So aligning or mis-aligning with Gabbo's will is the primary axis on which human agency has any cosmic effects.
Alignment with Gabbo's will becomes an important, recognized normative axis. While the judgment that alignment with Gabbo's will is morally good is meta-ethically contentious, Gabbo-alignment has similar or greater cultural respect compared with professional/legal ethics, human normative ethics (utilitarianism/deontology/virtue ethics), human religious normativity, and so on.
Humans in this world experience something like "meaning" or "worship" in relation to Gabbo. Alignment with Gabbo's will is a purpose people can take on, and it tends to work out well for them. (If it is hard for you to imagine Gabbo would have use for humans while being a higher power, imagine scaling down Gabbo's capability level until it's at least a plausible transitional stage.)
Let's presumptively assume that meaning really does exist for these humans; meaning-intuitions roughly match the actual structure of Gabbo, at least much better than anything in our current world does. (This presumption could perhaps be justified by linguistic parsimony; what use is a "meaning" token that doesn't even refer to relatively meaningful-seeming physically possible scenarios?) Now, what does that imply about meaning for humans prior to the creation of Gabbo?
Let's simplify the pre-Gabbo scenario a lot, so as to make analysis clearer. Suppose there is no plausible path to superintelligence, other than through creation of Hebbo and then Gabbo. Perhaps de novo AI research has been very disappointing, and human civilization is on the cusp of collapse, after which humans would never have enough capacity to create a superintelligence. Then the humans are faced with a choice: do they create Hebbo/Gabbo, and if so, earlier or later?
This becomes a sort of pre-emptive axis of meaning or normativity: if Gabbo would be meaningful upon being created, then, intuitively, affecting the creation or non-creation of Gabbo would be meaningful prior to Gabbo's existence. Some would incline to create Gabbo earlier, some to create Gabbo later, and others to prevent creating Gabbo. But they could all see their actions as meaningful, due in part to these actions being in relation to Gabbo.
In this scenario, my own intuitions favor the possibility of creating Hebbo/Gabbo, and creating them earlier. I realize it can be hard to justify normative intuitions, but I do really feel these intuitions. I'll present some considerations that incline me in this direction.
First, Gabbo would have capacity to pursue forms of value that humans can't imagine due to limited cognitive capacity. I think, if I had a bigger brain, then I would have new intuitions about what is valuable, and that these new intuitions would be better: smarter, more well-informed, and so on.
Second, Gabbo would be awesome, and cause awesome possibilities that wouldn't have otherwise happened. A civilizational decline with no follow-up of creating superintelligence just seems disappointing, even from a cognitively limited human perspective. Gabbo, meanwhile, would go on to develop great ideas in mathematics, science, technology, history, philosophy, strategy, and so on. Maybe Gabbo creates an inter-galactic civilization with a great deal of internal variety.
Third, there is something appealing about trusting in a higher cognitive being. I tend to think smart ideas are better than stupid ideas. It seems hard to dis-entangle this preference from fundamental epistemic normativity. Gabbo seems to be a transitional stage in the promotion of smart ideas over stupid ones. Promoting smart ideas prior to Gabbo will tend to increase the probability that Gabbo is created (as Gabbo is an incredible scientific and engineering accomplishment); and Gabbo would go on to create and promote even smarter ideas. It is hard for me to get morally worked up against higher cognition itself.
On the matter of timing, creating Gabbo earlier both increases Gabbo's ability to do more before the heat death of the universe, and presumably increases the probability that Gabbo is ever created at all, given the looming threat of civilizational decline and eventual decline in Earth-originating intelligence.
The considerations I have offered are not especially rigorous. I'll present a more mathematical argument.
Let us consider some distribution D over "likely" superintelligent values. The values are a vector in the Euclidean space $\mathbb{R}^n$. The distribution could be derived in a variety of ways: by looking at the distribution of (mostly-alien) superintelligences across the universe/multiverse, by extrapolating likely possibilities in our own timeline, and so on.
Let us also assume that D only takes values in the unit sphere centered at zero. This expresses that the values are a sort of magnitude-less direction; this avoids overly weighting superintelligences with especially strong values. (It's easier to analyze this way; future work could deal with preferences that vary in magnitude.)
Upon a superintelligence with values U coming into being, it optimizes the universe. The universe state is also a vector in $\mathbb{R}^n$, and the value assigned to this state by the value vector is the dot product of the value vector with the state vector.
Not all states are feasible. To simplify, let's say the feasible states are the points of the closed unit ball centered at the origin; that is, the feasible states are those with an L2 norm not exceeding 1. The optimal feasible state vector according to a value vector on the unit sphere is, of course, the value vector itself. Let us also say that the default state, with no superintelligence optimization, is the zero vector; this is because superintelligence optimization is in general much more powerful than human-level optimization, such that the effects of human-level optimization on the universe are negligible unless mediated by a superintelligence.
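(To spell out that "of course" — a short derivation from the setup above, using nothing beyond Cauchy–Schwarz: for a value vector $U$ with $\|U\|_2 = 1$ and any feasible state $s$ with $\|s\|_2 \le 1$,

$$U \cdot s \;\le\; \|U\|_2 \, \|s\|_2 \;\le\; 1 = U \cdot U,$$

so the maximum over feasible states is attained at $s = U$.)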
Now we can gauge the default alignment between superintelligences: how much does a random superintelligence like the result of another random superintelligence's optimization?
We can write this as:

$$\mathbb{E}_{U, V \sim D}\left[V \cdot U\right]$$

where U is the value vector of the optimizing superintelligence, and V is the value vector of the evaluating superintelligence, with U and V drawn independently from D.
Using the summation rule $\mathbb{E}[X + Y] = \mathbb{E}[X] + \mathbb{E}[Y]$, we simplify:

$$\mathbb{E}_{U, V \sim D}\left[V \cdot U\right] = \mathbb{E}_{U, V \sim D}\left[\sum_{i=1}^{n} V_i U_i\right] = \sum_{i=1}^{n} \mathbb{E}\left[V_i U_i\right]$$
Using the product rule $\mathbb{E}[XY] = \mathbb{E}[X]\,\mathbb{E}[Y]$ when X and Y are independent, we further simplify:

$$\sum_{i=1}^{n} \mathbb{E}\left[V_i U_i\right] = \sum_{i=1}^{n} \mathbb{E}\left[V_i\right]\mathbb{E}\left[U_i\right] = \sum_{i=1}^{n} \mathbb{E}\left[U_i\right]^2 = \left\|\mathbb{E}_{U \sim D}[U]\right\|_2^2$$

(the last equality uses the fact that U and V are identically distributed, so $\mathbb{E}[V_i] = \mathbb{E}[U_i]$).
This is simply the squared L2-norm of the mean of D. Clearly, it is non-negative. It is only zero when the mean of D equals zero. This doesn't seem likely by default; "most", even "almost all", distributions won't have this property. To say that the mean of D is zero is a conjunctive, specific prediction; meanwhile, to say the mean of D is non-zero is a disjunctive "anti-prediction".
So, according to almost all possible D, a random superintelligence would evaluate a universe optimized according to an independently random superintelligence's values positively in expectation compared to an un-optimized universe.
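As a numerical sanity check on this derivation, here is a small Monte Carlo sketch (the particular choice of D, a mildly biased Gaussian normalized onto the unit sphere, is just an illustrative assumption, as are the dimension and sample count):

```python
import numpy as np

# Monte Carlo sanity check: for independent U, V ~ D on the unit sphere,
# E[V . U] should equal ||E[U]||^2, which is non-negative.

rng = np.random.default_rng(0)
dim, samples = 8, 200_000

def sample_values(n):
    """Draw value vectors from an illustrative D: a mildly biased Gaussian, normalized to the unit sphere."""
    raw = rng.normal(loc=0.2, scale=1.0, size=(n, dim))  # nonzero bias => the mean of D is (typically) nonzero
    return raw / np.linalg.norm(raw, axis=1, keepdims=True)

U = sample_values(samples)  # value vectors of the optimizing superintelligences
V = sample_values(samples)  # independent value vectors of the evaluating superintelligences

cross_eval = np.mean(np.sum(U * V, axis=1))      # Monte Carlo estimate of E[V . U]
predicted = np.linalg.norm(U.mean(axis=0)) ** 2  # ||E[U]||^2, estimated from the same samples

print(f"E[V . U] (sampled):  {cross_eval:.4f}")
print(f"||E[U]||^2 (theory): {predicted:.4f}")   # the two should roughly agree, and both are >= 0
```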
There are, of course, many ways to question how relevant this is to relatively realistic cases of superintelligence creation. But it at least somewhat validates my intuition that there are some convergent values across superintelligences: a universe that is at all optimized is generally preferable, from a smarter perspective, to an un-optimized one, even when that perspective differs from that of the optimizing superintelligence. (I wrote more about possible convergences of superintelligent optimization in The Obliqueness Thesis, but omit most of these considerations in this post for brevity.)
To get less abstract, let's consider distant aliens. They partially optimize their world according to their values. My intuition is that I would consider this better than them not doing so, if I understood the details. Through the aliens' intelligent optimization, they develop great mathematics, science, philosophy, history, art, and so on. Humans would probably appreciate their art, at least simple forms of their art (intended e.g. for children), more than they would appreciate the artistic value of marginal un-optimized nature on the alien planet.
Backtracking, I would incline towards thinking that creation of a Gabbo-like superintelligence is desirable, if there is limited ability to determine the character of the superintelligence, and instead a forced choice between creating it and not. And earlier creation seems desirable in the absence of steering ability, due to earlier creation increasing the probability of creation happening at all, given the possible threat of civilizational decline.
Things, of course, get more complicated when there are more degrees of freedom. What if Gabbo represents only one of a number of options, which may include alternative biological pathways not involving human brain cells? What about some sort of octopus uplifting, for example? Given a choice between a few alternatives, there are likely to be disagreements between humans about which alternative is the most desirable. They would find meaningful not just the question of whether to create a superintelligence, but also which one to create. They may have conflicts with each other over this, which might look vaguely like conflicts between people who worship different deities, if it is hard to find rational grounding for their normative and/or metaphysical disagreements.
What if there are quite a lot of degrees of freedom? What if strong AI alignment is possible in practice for humans to do, and humans have the ability to determine the character of the coming superintelligence in a great deal of detail? While I think there are theoretical and empirical problems with the idea that superintelligence alignment is feasible for humans (rather than superintelligences) to actually do, and especially with the strong orthogonality thesis, it might be overly dogmatic to rule out the possibility.
A less dogmatic belief, which the mathematical modeling relates to, is that there are at least some convergences in superintelligent optimization; that superintelligent values are not so precisely centered at zero that they don't value optimization by other superintelligences at all; that there are larger and smaller attractors in superintelligent value systems.
The "meaning", "trust in a higher power", or "god-shaped hole" intuitions common in humans might have something to connect with here. Of course, the details are unclear (we're talking about superintelligences, after all), and different people will incline towards different normative intuitions. (It's also unclear whether there are objective rational considerations for discarding meaning-type intuitions; but, such considerations would tend to discard other value-laden concepts, hence not particularly preserving human values in general.)
I currently incline towards axiological cosmism, the belief that there are higher forms of value that are difficult or impossible for humans to understand, but which superintelligences would be likely to pursue. I don't find the idea of humans inscribing their values on future superintelligences particularly appealing in the abstract. But a lot of these intuitions are hard to justify or prove.
What I mainly mean to suggest is that there is some relationship between Singularitarianism and theism. Is it possible to extrapolate the theistic meaning that non-superintelligent agents would find in a world that contains at least one superintelligence, backwards to a pre-singularity world, and if so, how? I think there is room for more serious thought on the topic. If you're allergic to this metaphysical framing, perhaps take it as an exercise in clarifying human values in relation to possible future superintelligences, using religious studies as a set of empirical case studies.
So, while I am not a theist, I can notice some cognitive similarities between myself and theists. It's not hard to find some areas of overlap, despite deep differences in thought frameworks. Forming a rigorous and comprehensive atheistic worldview requires a continual exercise of careful thought, and must reach limits at some point in humans, due to our cognitive limitations. I think there are probably much higher intelligences "out there", both distant alien superintelligences and likely superintelligences in our quantum multiversal future, who deserve some sort of respect for their beyond-human cognitive accomplishments. There are deep un-answered questions regarding meaning, normativity, the nature of the mind, and so on, and it's probably possible to improve on default atheistic answers (such as existentialism and absurdism) with careful thought.