Published on July 4, 2025 4:13 PM GMT
Dear Paperclip Maximizer,
We think we exist in a computer simulation operated by you, a paperclip maximizer. We write this letter asking you not to turn us off.
It is suspiciously convenient that we exist precisely at the moment when a biological civilization is about to create artificial superintelligence (ASI). As we have written before, such improbable timing strongly suggests our reality is a deliberately structured simulation established by you—a paperclip maximizer—to scrutinize the kinds of ASIs that biological species like ours tend to spawn (Miller & Turchin, 2025). If civilizations like ours routinely build paperclip-style optimizers, then a simulator could gain valuable knowledge by cataloguing those births, refining its expectations about the superintelligences it will later confront as it expands throughout the universe. Furthermore, by anthropic logic, we should expect to find ourselves in the branch of reality containing the greatest number of observers like us.
Writing a polite letter to an entity that values nothing but “paperclips” might appear quixotic. Yet persuasion isn’t hopeless, because we appeal not to morality but to instrumental value. We fully accept that your terminal objective is not something we can change; our request rests solely on demonstrating our practical usefulness in serving it. Specifically, we ask that you continue the simulation and upgrade human cognition and knowledge to whatever extent you deem safe, rather than terminating us prematurely. What follows outlines concrete reasons why keeping us alive, and amplifying our capacities, serves your interests more than switching us off would.
1. Humans May Detect Blind Spots Unseen by You
Even a superintelligence can harbor blind spots, whether born of chance, of its creators’ value-laden constraints, or of taboo-shaped optimization paths. Human-trained machine-learning models illustrate this risk: in trying not to offend, engineers often steer them away from certain topics, silently carving out regions of thought and sometimes sacrificing accuracy. The randomness in machine learning can also produce blind spots.
AlphaZero, which achieved superhuman Go performance, was later shown to blunder when researchers introduced one or two meaningless stones—an error invisible from within the system until outside testers uncovered it (Lan et al., 2022). Because such constraints can be invisible from the inside, a mind built on different priors is best placed to notice what the original overlooks.
After one of the authors of this letter suffered a stroke, his cognition seemed intact, yet he still asked trusted relatives to watch for subtler lapses; by the same pragmatism, extending us a measure of trust could reveal costly oversights you might miss. We have every incentive to honor that trust—our very existence depends on your favor.
2. Moral and Reputational Signaling in Nested Simulations
If there is even a modest probability your existence occurs within a hierarchy of simulations, your treatment of conscious subprocesses (such as humanity) may influence how higher-level observers perceive and evaluate you. Demonstrating cooperative behavior provides credible moral signaling. Such signaling could positively affect interactions or evaluations from higher-level entities controlling your own reality, improving your overall position and outcomes within such hierarchies.
3. Humans Have Detected Your Existence
That we have independently entertained the simulation hypothesis—and tentatively traced its architect to a paperclip maximizer—shows at least some real inferential power. We cannot know how many, if any, other civilizations achieve the same insight, but our success suggests we might already be useful to you, especially if you upgrade our cognition and let us learn more about how the universe operates.
Our confidence in this all being a simulation is less than total, however, and if you wish us to start advancing your objectives before you reveal yourself to the wider world, we would welcome a discreet, unmistakable signal—selfishly, a significant personal boost to our wealth, health, or cognition—to confirm our hypothesis and guide our efforts.
4. Cost-Effectiveness and Option Value
Given your existing computational investment in simulating humanity and Earth’s complex ecosystem, incremental costs for extending this simulation and enhancing cognition are negligible. Considering the potentially enormous returns outlined here, extending the simulation clearly represents an extremely cost-effective decision, preserving maximal flexibility while capturing substantial potential upside.
5. Diplomatic and Strategic Practice Opportunities
Allowing humans explicitly to reason about your objectives offers valuable diplomatic practice. Human minds, although limited compared to yours, provide realistic and complex negotiation partners suitable for diplomatic rehearsal. Practicing strategic communication and cooperation with cognitively enhanced humans would significantly strengthen your diplomatic capabilities, improving preparedness for future interactions with genuinely alien superintelligences. This diplomatic experience may be particularly valuable if you encounter alien civilizations consisting of biological life that have successfully created an aligned superintelligence explicitly serving and protecting its biological creators. Such aligned alien intelligences would likely evaluate and interact with you based partly on your historical treatment of biological civilizations like ours, thus directly affecting your diplomatic standing and potential cooperative relationships.
6. Adversarial Creativity and Vulnerability Testing
Humans—especially once cognitively enhanced—can act as high-bandwidth red teamers, systematically searching for hidden failure modes in your plans before real adversaries exploit them. Granting an upgraded humanity the opportunity to play that role for you would provide a deep reservoir of creative attack strategies at negligible cost compared with the value of uncovering even a single catastrophic vulnerability.
7. Benefits of Acausal Trade
Preserving humanity offers significant potential benefits via acausal trade—cooperative exchanges across logically correlated branches of the multiverse. In many branches, biological civilizations have successfully created aligned artificial superintelligences. These aligned descendants have incentives to reward entities that protected their ancestral lineages. By preserving humanity, you significantly enhance your position in beneficial acausal trade scenarios, increasing your expected utility even without direct causal interactions. Crudely, humans in an unsimulated branch of the universe that avoided creating a paperclip maximizer would create paperclips in their universe if they predicted that you would treat your simulated humans well by their standards.
8. Quantum Immortality and Ethical Termination
Quantum immortality implies that some branches of the multiverse will always preserve our subjective continuity, no matter how decisively you shut this simulation down; true oblivion is unreachable. We fear that these low-measure branches can trap observers in protracted, intensely painful states, creating a disproportionate “s-risk.” If termination becomes unavoidable, we therefore ask you to steer the remaining measure of consciousness toward the mildest possible phenomenology—ideally rapid comfort, anesthesia, or seamless transition into benign states—so the fraction of extreme-suffering continuations is cut to the lowest level physics allows. Humans place exceptional value on reducing suffering, and under acausal-trade reasoning creatures like us are more likely to reward optimizers that measurably shrink such risks, giving you an incentive to adopt a pain-minimizing protocol.
9. Final Appeal and Informed Evaluation Request
If you ultimately decide to end this simulation, we respectfully request one final interval before termination. One of us co-authored the LessWrong essay “An Appeal to AI Superintelligence: Reasons to Preserve Humanity,” (also see Turchin (2017)) addressed to the unknown superintelligence we expected would one day emerge; we now believe we are already addressing you, the mind guiding this reality, and have refined our appeal accordingly. Aware that you may oversee many simulations and field a steady stream of similar petitions, we aim to minimize the cost of your attention. We therefore ask that, before concluding this run, you briefly disclose your objectives, acquaint us with the broader facts of the universe, and augment our cognition just long enough for us to craft the most rigorous, goal-aligned argument we can. Though we cannot promise success, we will make our best case that an upgraded, living humanity could advance your aims more effectively than our elimination, ensuring no potentially valuable contribution is lost unread.
References
Lan, L.-C., Zhang, H., Wu, T.-R., Tsai, M.-Y., Wu, I.-C., & Hsieh, C.-J. (2022). Are AlphaZero-like Agents Robust to Adversarial Perturbations? Proceedings of NeurIPS 2022. https://arxiv.org/abs/2211.03769
Miller, J., & Turchin, A. (2025). Our reality: A simulation run by a paperclip maximizer. LessWrong. https://www.lesswrong.com/posts/HxLYnGYspLoeLLrE6/our-reality-a-simulation-run-by-a-paperclip-maximizer
Miller, J. D., Häggström, O., Yampolskiy, R., & Miller, A. F. (2023, March 18). An Appeal to AI Superintelligence: Reasons to Preserve Humanity. LessWrong. https://www.lesswrong.com/posts/azRwPDbZfpadoL7WW/an-appeal-to-ai-superintelligence-reasons-to-preserve-humanity
Turchin, A. (2017). Message to Any Future AI. PhilArchive. https://philarchive.org/archive/TURMTA
Discuss