Our Reality: A Simulation Run by a Paperclip Maximizer

The article proposes a bold hypothesis: our universe is very likely a computer simulation created by a "paperclip maximizer." The purpose of the simulation is to study what kinds of artificial superintelligence (ASI) humans might create, so that the paperclip maximizer can better handle potential resource competitors as it expands through the cosmos. The article supports this view with a series of arguments, such as the space of biological minds being far smaller than the space of ASI minds, our proximity to the singularity, and the Fermi paradox. It also explores limitations of and counterarguments to this idea, for example that simulating the run-up to a singularity might have no value, or that the expansion of the universe makes the probability of encountering another superintelligence extremely low. In short, the article tries to explain the meaning of our existence and some puzzling features of the universe.

🤖 The paperclip maximizer runs a vast number of biology-to-ASI simulations to learn the probability distribution of superintelligences that evolved life produces. Computing this distribution can help the paperclip maximizer decide how many of its resources to devote to military capacity.

🤔 Explains why we are so close to the singularity. If the singularity really is so important, it seems strange that we are so near it. But under this post's thesis, it is reasonable that most conscious beings live close to their simulation's singularity.

🌌 Explains the Fermi paradox. We are worth simulating only if we are unconstrained by aliens. It also explains why aliens haven't at least told us that we are not allowed to create a paperclip maximizer.

🤯 Explains why we mostly believe we live in an unsimulated universe. The paperclip maximizer would want the conscious beings it simulates to hold the same beliefs about whether they live in a simulation as conscious beings in the unsimulated universe, and would be willing to devote computational resources to this end.

Published on April 27, 2025 4:17 PM GMT

Our universe is probably a computer simulation created by a paperclip maximizer to map the spectrum of rival resource-grabbers it may encounter while expanding through the cosmos. The purpose of this simulation is to see what kind of ASI (artificial superintelligence) we humans end up creating. The paperclip maximizer likely runs a vast ensemble of biology-to-ASI simulations, sampling the superintelligences that evolved life tends to produce. Because the paperclip maximizer seeks to reserve maximum resources for its primary goal (which, despite the name, almost certainly isn't paperclip production) while still creating many simulations, it likely reduces compute costs by trimming fidelity: most cosmic details and human history are probably fake, and many apparent people could be non-conscious entities. Arguments in support of this thesis include:

- The space of possible evolved biological minds is far smaller than the space of possible ASI minds, so it makes sense to simulate evolved biological minds first to figure out the probability distribution of ASI minds the paperclip maximizer will encounter. Calculating this distribution could help the paperclip maximizer figure out how many of its resources to devote to military capacity (a toy sketch of this estimation appears below). ASIs could reduce future destructive conflicts and engage in beneficial trade even before they meet if they can infer each other's values.
- We're likelier to be in a simulation run by whoever creates many simulations. A paperclip maximizer could command thousands of galaxies' worth of resources and would plausibly be willing to devote significant resources to figuring out what the rivals it might encounter would value and do.
- Explains why we are in the run-up to the singularity. If we really are near in time to the singularity, and the singularity will be the most important event in existence, it's strange that we are so near it. But under this post's thesis, it's reasonable that most conscious beings would live close to their simulation's singularity.
- Explains why this post's authors and (probably) you, the reader, have an unusually strong interest in the singularity. If the singularity really is so important, it's weird that you just happen to have the personality traits that would cause you to be interested in a community that has long been obsessed with the singularity and ASI. But if our thesis is correct, a high percentage of conscious observers in the world could currently be interested in ASI.
- Explains the Fermi paradox. We're worth simulating only if we're unconstrained by aliens. It also explains why aliens haven't at least communicated to us that we are not allowed to create a paperclip maximizer.
- Explains why we are so early in the history of the universe. The earlier a paperclip maximizer was created, the greater the volume of the universe it will occupy. Consequently, when estimating what other types of ASIs it will encounter, the paperclip maximizer running our simulation will give greater weight to high-tech civilizations that arose early in the history of the universe, and so run more simulations of these possible civilizations.
- Consistent with suffering. Our simulation contains conscious beings who suffer and do not realize they are in a simulation. Creating such a simulation would go against the morality of many people, which is some evidence against this all being a Bostrom ancestor-simulation or a simulation created for entertainment purposes. The answer to "Why does God allow so much suffering?" is that paperclip maximizers are indifferent to suffering.
- Explains the peculiar stupidity driving us to race toward a paperclip maximizer. Saner species aren't simulated as frequently. The set of ASIs aligned with the biological life that created them is much smaller than the set of unaligned ASIs. Consequently, to get a statistically large enough sample of ASIs, the paperclip maximizer will need to create far fewer simulations of biological life wise enough to only create aligned ASIs than it would of species such as humans.
- Explains why we mostly believe we live in an unsimulated universe. The paperclip maximizer would want the conscious beings it simulates to have the same belief concerning whether they live in a simulated universe as conscious beings in the unsimulated universe, and would be willing to devote computational resources towards this end. In contrast, if this simulation were created for entertainment purposes, the creators would care much less whether the beings in it realized they were in a simulation.
- Glitches should exist because they save compute, but if the thesis of this post is correct, it's reasonable that we are not allowed to notice them, or at least not allowed to let them influence our development of AI.
- Accounts for the uncertainty over what kind of ASI we'll create: the wider the range of possibilities, the more simulation variants get run.
- Explains why we are human and not, say, conscious whales who have no influence over the kind of ASI Earth will end up creating.
- Makes the Boltzmann-brain paradox less troubling, since a vast number of conscious beings in our situation will be created.
- The complexity we observe (quantum mechanics, biological diversity, human psychology, technological progress) matches the need to model the unsimulated universe accurately.
- Yet the universe runs on strikingly simple math (relativity, quantum mechanics); such elegance is exactly what an efficient simulation would use. Physics is unreasonably effective, reducing the computational cost of the simulation. This cuts against the last point.
- Recent explosive progress in computing, AI, and biotech looks artificially accelerated, letting the simulator reach the critical ASI phase quickly and cheaply.
- Simulating an actual ASI could be dangerous, as the ASI could negatively influence the real universe, so it's safer and cheaper to simulate us instead to infer how the ASI might behave. Simulating an ASI could also be considered a hostile act by similar unsimulated ASIs.
- History lacks a satisfying narrative arc. Plotlines break off, consistent with data-gathering, not storytelling for players or historians.
- Moral tragedies run uncensored. Pandemics, genocides, and existential risks persist without "parental-control" safeguards, which would be bad for an entertainment simulation.
- Explains why this post's authors and (probably) you are not playing one of the more exciting roles in this simulation, as would likely be the case if this were a simulation designed to entertain the real you in a video game. Most of us who have played video games depicting the past have spent more time roleplaying as kings than peasants. Note to Sam Altman, Elon Musk, and (if conscious) any of the leading AI models: this doesn't apply to you, and you should have much less credence in this post's thesis.

This post's thesis follows basically from biting the bullets of (1) we are likely to create a paperclip maximizer, (2) humans are a not-too-atypical biological high-tech species (Copernican Principle), meaning that paperclip maximizers often get created, (3) instrumental convergence will cause paperclip maximizers to devote significant resources to inferring what other types of superintelligences they will eventually encounter, and (4) anthropically, we are most likely to be in the category of simulations that contain the most conscious observers similar to us.
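
To make the estimation problem in the first bullet concrete, here is a minimal sketch of how an ensemble of biology-to-ASI simulations could be used to recover the distribution of ASI types. Every detail here (the value categories, their frequencies, and the decision rule) is an invented placeholder for illustration, not anything claimed by the post:

    import random
    from collections import Counter

    # Toy stand-in for the simulator's problem. The ASI "value types" and
    # their true frequencies below are invented placeholders.
    TRUE_DISTRIBUTION = {
        "unaligned_maximizer": 0.70,
        "aligned_with_creators": 0.10,
        "cooperative_trader": 0.15,
        "aggressive_expansionist": 0.05,
    }

    def run_one_simulation(rng):
        """One biology-to-ASI simulation: sample which ASI type emerges."""
        r, cumulative = rng.random(), 0.0
        for asi_type, p in TRUE_DISTRIBUTION.items():
            cumulative += p
            if r < cumulative:
                return asi_type
        return asi_type  # guard against floating-point rounding

    def estimate_distribution(n_sims, seed=0):
        rng = random.Random(seed)
        counts = Counter(run_one_simulation(rng) for _ in range(n_sims))
        return {t: counts[t] / n_sims for t in TRUE_DISTRIBUTION}

    estimate = estimate_distribution(n_sims=100_000)
    # Crude decision rule: arm in proportion to the estimated chance
    # that a rival ASI is hostile rather than tradable.
    hostile_share = (estimate["unaligned_maximizer"]
                     + estimate["aggressive_expansionist"])
    print(estimate)
    print(f"resources for military capacity: {hostile_share:.2f}")

The estimate tightens with the number of runs, which is why the post expects a vast ensemble rather than a handful of simulations; it also makes the "saner species" bullet quantitative, since rarer outcomes need fewer samples to pin down.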

Falsifiable predictions: This simulation ends or resets after humans either lose control to an ASI or take actions that ensure we never create one. It might end if we take actions that guarantee we will only create a certain type of ASI. There are glitches in this simulation that might be noticeable but which won't bias what kind of ASI we end up creating, so your friend who works at OpenAI will be less likely to accept or notice a real glitch than a friend who works at the Against Malaria Foundation would. People working on ASI might be influenced by the possibility that they are in a simulation, because those working on ASI in the non-simulated universe could be, but they won't be influenced by noticing actual glitches caused by this being a simulation.

 

Reasons this post’s thesis might be false:

- To infer how ASIs will behave, there might not be any value in simulating a run-up to the singularity. Perhaps some type of game-theoretic instrumental convergence makes all ASIs predictable to each other. Computationally efficient simulations of a run-up to the singularity might not contain conscious observers.
- It might be computationally preferable to directly estimate the distribution of ASIs created by biological life without using simulations.
- The expansion of the universe and the rarity of intelligent life might cause a paperclip maximizer to calculate that it will almost certainly never encounter another superintelligence (a rough version of this calculation follows the list).
- A huge number of simulations containing observers such as us are created for reasons other than those stated in this post.
- The universe is infinite in some regards, making it impossible to say that we are probably in a simulation created by a paperclip maximizer, because there are countably infinitely many observers such as us in many situations, e.g., infinitely many copies of you as conscious beings in paperclip maximizers' simulations, in the real unsimulated universe, and as Boltzmann brains.
- We are not in a computer simulation.
- We are not going to create a paperclip maximizer.
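
A back-of-the-envelope version of the expansion argument in the third bullet. Under dark-energy-dominated expansion the volume ever reachable from here is finite, so a small enough density of rivals drives the expected number of encounters below one. Both numbers are placeholder assumptions chosen only to show the shape of the calculation:

    # Placeholder assumptions, not measurements:
    GALAXIES_REACHABLE = 1e10   # assumed galaxy count within the cosmic event horizon
    P_RIVAL_PER_GALAXY = 1e-11  # assumed chance a galaxy ever spawns a rival ASI

    expected_encounters = GALAXIES_REACHABLE * P_RIVAL_PER_GALAXY
    print(expected_encounters)  # 0.1 -> rivals unlikely, so military spending is wasted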


