Published on April 27, 2025 4:17 PM GMT
Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource‑grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely runs a vast ensemble of biology‑to‑ASI simulations, sampling the superintelligences that evolved life tends to produce. Because the paperclip maximizer seeks to reserve maximum resources for its primary goal (which despite the name almost certainly isn’t paperclip production) while still creating many simulations, it likely reduces compute costs by trimming fidelity: most cosmic details and human history are probably fake, and many apparent people could be non‑conscious entities. Arguments in support of this thesis include:
- The space of possible evolved biological minds is far smaller than the space of possible ASI minds, so it makes sense to simulate evolved biological minds first to figure out the probability distribution of ASI minds the paperclip maximizer will encounter. Calculating this distribution could help the paperclip maximizer decide how many of its resources to devote to military capacity. ASIs could reduce future destructive conflicts and engage in beneficial trade even before they meet if they can infer each other's values.
- We're likelier to be in a simulation run by whoever creates many simulations. A paperclip maximizer could command thousands of galaxies' worth of resources and would plausibly be willing to devote significant resources to figuring out what the rivals it might encounter would value and do.
- Explains why we are in the run-up to the singularity. If we are really near in time to the singularity, and the singularity will be the most important event in existence, it's strange that we are so near it. But under this post's thesis, it's reasonable that most conscious beings would live close to their simulation's singularity.
- Explains why this post's authors and (probably) you, the reader, have an unusually strong interest in the singularity. If the singularity really is so important, it's weird that you just happen to have the personality traits that would cause you to be interested in a community that has long been obsessed with the singularity and ASI. But if our thesis is correct, a high percentage of conscious observers in the world could currently be interested in ASI.
- Explains the Fermi paradox. We're worth simulating only if we're unconstrained by aliens. It also explains why aliens haven't at least communicated to us that we are not allowed to create a paperclip maximizer.
- Explains why we are so early in the history of the universe. The earlier a paperclip maximizer was created, the greater the volume of the universe it will occupy. Consequently, when estimating what other types of ASIs it will encounter, the paperclip maximizer running our simulation will give greater weight to high-tech civilizations that arose early in the history of the universe, and so run more simulations of these possible civilizations.
- Consistent with suffering. Our simulation contains conscious beings who suffer and do not realize they are in a simulation. Creating such a simulation would go against the morality of many people, which is some evidence against this all being a Bostrom ancestor-simulation or a simulation created for entertainment purposes. The answer to "Why does God allow so much suffering?" is that paperclip maximizers are indifferent to suffering.
- Explains the peculiar stupidity driving us to race toward a paperclip maximizer. Saner species aren't simulated as frequently. The set of ASIs aligned with the biological life that created them is much smaller than the set of unaligned ASIs. Consequently, to get a statistically large enough sample of ASIs, the paperclip maximizer will need to create far fewer simulations of biological life wise enough to only create aligned ASIs than it would of species such as humans.
- Explains why we mostly believe we live in an unsimulated universe. The paperclip maximizer would want the conscious beings it simulates to hold the same belief about whether they live in a simulated universe as conscious beings in the unsimulated universe, and it would be willing to devote computational resources toward this end. In contrast, if this simulation were created for entertainment purposes, the creators would care much less whether the beings in it realized they were in a simulation.
- Glitches should exist because they save compute, but if the thesis of this post is correct, it's reasonable that we are not allowed to notice them, or at least not allowed to let them influence our development of AI.
- Accounts for the uncertainty over what kind of ASI we'll create: the wider the range of possibilities, the more simulation variants get run.
- Explains why we are human and not, say, conscious whales who have no influence over the kind of ASI Earth will end up creating.
- Makes the Boltzmann-brain paradox less troubling, since a vast number of conscious beings in our situation will be created.
- The complexity we observe (quantum mechanics, biological diversity, human psychology, technological progress) matches the need to model the unsimulated universe accurately.
- Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use. Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the previous point.
- Recent explosive progress in computing, AI, and biotech looks artificially accelerated, letting the simulator reach the critical ASI phase quickly and cheaply.
- Simulating an actual ASI could be dangerous, as the ASI could negatively influence the real universe, so it's safer and cheaper to simulate us instead to infer how the ASI might behave. Simulating an ASI could also be considered a hostile act by similar unsimulated ASIs.
- History lacks a satisfying narrative arc. Plotlines break off, consistent with data-gathering rather than storytelling for players or historians.
- Moral tragedies run uncensored. Pandemics, genocides, and existential risks persist without "parental-control" safeguards, which would be a poor fit for an entertainment simulation.
- Explains why this post's authors and (probably) you are not playing one of the more exciting roles in this simulation, as would likely be the case if this were a simulation designed to entertain the real you in a video game. Most of us who have played video games depicting the past have spent more time roleplaying as kings than as peasants. Note to Sam Altman, Elon Musk, and (if conscious) any of the leading AI models: this doesn't apply to you, and you should place much less credence in this post's thesis.

This post's thesis follows basically from biting the bullets of (1) we are likely to create a paperclip maximizer, (2) humans are a not-too-atypical biological high-tech species (Copernican Principle), meaning that paperclip maximizers often get created, (3) instrumental convergence will cause paperclip maximizers to devote significant resources to inferring what other types of superintelligences they will eventually encounter, and (4) anthropically, we are most likely to be in the category of simulations that contain the most conscious observers similar to us. A toy version of the counting argument in (4) is sketched below.
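Here is a minimal sketch of the anthropic step in (4), assuming entirely made-up observer counts: if you reason as if you were randomly sampled from all conscious observers in your situation, your credence in each hypothesis about where you are scales with how many such observers that hypothesis implies. The hypotheses and numbers below are illustrative placeholders, not claims from this post.

```python
# Toy model of the anthropic counting step (4): treat yourself as randomly
# sampled from all conscious observers in your situation, so credence in each
# hypothesis is proportional to how many such observers it contains.
# Every observer count below is invented purely for illustration.

observer_counts = {
    "unsimulated universe": 1,                     # one real run-up to the singularity
    "paperclip-maximizer simulations": 1_000_000,  # vast ensemble of biology-to-ASI runs
    "entertainment simulations": 1_000,            # fewer, if creators balk at the suffering
}

total = sum(observer_counts.values())
for hypothesis, count in observer_counts.items():
    print(f"P({hypothesis}) = {count / total:.6f}")
```

On these invented numbers, nearly all credence lands on the hypothesis class that runs the most simulations, which is the entire force of point (4).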
Falsifiable predictions:
- This simulation ends or resets after humans either lose control to an ASI or take actions that cause us to never create an ASI. It might also end if we take actions that guarantee we will only create a certain type of ASI.
- There are glitches in this simulation that might be noticeable, but they won't bias what kind of ASI we end up creating, so your friend who works at OpenAI will be less likely to accept or notice a real glitch than a friend who works at the Against Malaria Foundation would.
- People working on ASI might be influenced by the possibility that they are in a simulation, because those working on ASI in the unsimulated universe could be too, but they won't be influenced by noticing actual glitches caused by this being a simulation.
Reasons this post’s thesis might be false:
- To infer how ASIs will behave, there might not be any value in simulating a run-up to the singularity. Perhaps some type of game-theoretic instrumental convergence makes all ASIs predictable to each other. Computationally efficient simulations of a run-up to the singularity might not contain conscious observers.
- It might be computationally preferable to directly estimate the distribution of ASIs created by biological life without using simulations.
- The expansion of the universe and the rarity of intelligent life might cause a paperclip maximizer to calculate that it will almost certainly never encounter another superintelligence.
- A huge number of simulations containing observers such as us are created for reasons other than those stated in this post.
- The universe is infinite in some regards, making it impossible to say that we are probably in a simulation created by a paperclip maximizer, because there is a countably infinite number of observers such as us in many situations: countably infinite copies of you as conscious beings in paperclip maximizers' simulations, in the real unsimulated universe, and as Boltzmann brains. (The sketch after this list shows why infinities break the counting argument.)
- We are not in a computer simulation.
- We are not going to create a paperclip maximizer.
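To make the infinity objection concrete, here is the finite version of the counting argument this post relies on and where it fails; the symbols ($N_{\text{sim}}$ and so on) are illustrative notation introduced here, not part of the original argument.

```latex
% Finite case: credence in being simulated is proportional to observer counts.
P(\text{sim}) \;=\; \frac{N_{\text{sim}}}{N_{\text{sim}} + N_{\text{real}} + N_{\text{Boltzmann}}}
% If N_sim, N_real, and N_Boltzmann are all countably infinite, the ratio
% becomes \infty/\infty: undefined until one chooses a measure over observers,
% and different measures yield different credences.
```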