LessWrong · November 25, 2024
Are You More Real If You're Really Forgetful?

 

This article explores whether, under multiverse theory (Tegmark IV), we simultaneously exist in every universe branch consistent with our perceptions and memories. Starting from the relationship between low-level and high-level variables, it works through several scenarios — an extra photon, forgotten memories, confabulated memories — and asks whether such factors change which universe branches we inhabit. The author's view is that we may exist in all universes consistent with our high-level observations, even when those universes differ in low-level state. The article also examines how well a universe "abstracts" affects perception and existence, and whether we should expect to find ourselves in universes that abstract well.

🤔**Low-level vs. high-level variables:** The article assumes the universe's state is determined by a large number of low-level variables, but we can only perceive and remember a limited set of high-level variables, each corresponding to a class of low-level states. So when we observe a particular high-level state, many low-level states may correspond to it — do we exist in all of them simultaneously?

💡**Memory and universe branches:** Using an "extra photon" example, the article explores how perception and memory affect which branches we occupy. Even a tiny perceptual difference (such as seeing a red flash) distinguishes branches — but if we forget, or never notice, such a difference, does that mean we exist in all of those branches at once?

🤔**False memories and universe branches:** The article then considers the imperfection of human memory — confabulation and forgetting — and whether it affects which branches we exist in. If we recall an event that never happened, do we simultaneously exist in branches where the event really occurred and in branches where it was confabulated?

🤔**Abstraction and existence:** Finally, the article considers how well a universe abstracts: in a well-abstracting universe, low-level details matter less to high-level states. If we identify mainly with high-level features, should we expect to find ourselves in well-abstracting universes, since their "equivalence classes" are larger and contain more individuals like us?

Published on November 24, 2024 7:30 PM GMT

It's a standard assumption in anthropic reasoning that, effectively, we simultaneously exist in every place in Tegmark IV that simulates this precise universe (see e.g. here).

How far does this reasoning go?

Suppose that the universe's state is described by n low-level variables L = (l_1, ..., l_n). However, your senses are "coarse": you can only view and retain the memory of m variables H = (h_1, ..., h_m), where m << n and each h_i is a deterministic function of some subset of L.

Consider a high-level state H*, corresponding to each h_i being assigned some specific value. For any H*, there's an equivalence class of low-level states precisely consistent with H*.

Given this, if you observe H*, is it valid to consider yourself simultaneously existing in all corresponding low-level states consistent with H*?
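The setup above can be sketched as a toy coarse-graining. This is purely illustrative (the 3-bit state space and the parity map are my own choices, not from the post): a deterministic high-level map partitions the low-level states into equivalence classes, and a given observation leaves the observer consistent with every state in its class.

```python
from itertools import product

# Toy model: the universe's low-level state is 3 bits (l1, l2, l3).
# A coarse observer perceives a single high-level variable:
# h1 = parity of the first two bits. The third bit never registers.

def high_level(state):
    """Deterministic coarse-graining: maps a low-level state to the
    tuple of high-level variables the observer can perceive."""
    l1, l2, l3 = state
    return ((l1 + l2) % 2,)  # h1 only; l3 is invisible

# Partition all low-level states into equivalence classes: states
# mapping to the same observation are indistinguishable "from inside".
classes = {}
for state in product([0, 1], repeat=3):
    classes.setdefault(high_level(state), []).append(state)

for obs, states in sorted(classes.items()):
    print(obs, states)
# Observing h1 = 0 leaves the observer consistent with 4 distinct
# low-level states -- the equivalence class it "exists in".
```

The question in the post is whether "you" are smeared across all four states in your class, rather than secretly located in exactly one of them.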

Note that, so far, this is isomorphic to the scenario from Nate's post, which considers all universes that only differ by the choices of gauge (which is undetectable from "within" the system) equivalent.

Now let's examine increasingly weirder situations based on the same idea.

Scenario 1: suppose two universes differ only in a low-level detail your senses never register — say, one contains an extra photon that passes through the room without ever interacting with anything you perceive. Both universes are consistent with your high-level observations. Do you exist in both?

I'm inclined to bite this bullet: yes, you exist in all universes consistent with your high-level observations, even if their low-level states differ.

Scenario 2: if you completely forget a detail, does the set of universes you're embedded in increase? Concretely: suppose the photon did interact with your retina, and you saw a small red flash — but a minute later, you've entirely forgotten it. While you remembered the flash, you were anchored to the branch containing the photon; once the memory is gone, are you back in both branches?

I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.[1]

Scenario 3: Now imagine that you're a flawed human being, prone to confabulating or misremembering details, and that you don't hold the entire contents of your memories in mind all at once. If I ask you whether you saw a small red flash 1 minute ago, and you confirm that you did, will you end up in a universe where there's an extra photon, or in a universe where you've confabulated this memory? Or in both?

Scenario 4: Suppose you observe some macro-level event, such as learning that there are 195 countries in the world. Suppose there are similar-ish Everett branches where there are only 194 internationally recognized countries. This difference isn't small enough to get lost in thermal noise. The existence vs. non-existence of an extra country doubtless left countless traces of side-evidence in your conscious memories, such that AIXI would be able to reconstruct the country's (non-)existence even if you're prone to forgetting or confabulating the exact country count.

... Or would it? Are you sure that the experiential content you're currently perceiving, and the stuff currently in your working memory, anchor you only to Everett branches that have 195 countries?

Sure, if you went looking through your memories, you'd doubtless uncover some details that could distinguish a branch where you confabulated an extra country from a branch where it really exists. But you hadn't been doing that before reading the preceding paragraphs. Was the split made only when you started looking? Will you merge again, once you unload these memories?

This setup seems isomorphic, in the relevant sense, to the initial setup where you only perceive the high-level variables h_1, ..., h_m. In this case, we just model you as a system with even "coarser" senses.[2] Which, in turn, is isomorphic to the standard assumption of simultaneously existing in every place in Tegmark IV that simulates this precise universe.

One move you could make here is to claim that "you" only identify with systems that have some specific personality traits and formative memories. As a trivial example: a viewpoint that is consistent with your current perceptions and working-memory content, but which, upon querying its memories for its name, experiences remembering "Cass" as the answer, is not really "you".

But then, presumably you wouldn't consider "I saw a red flash one minute ago" part of your identity, else you'd consider naturally forgetting such a detail a kind of death. Similarly, even some macro-scale details like "I believe there are 195 countries in the world" are presumably not part of your identity. A you who confabulated an extra country is still you.

Well, I don't think this is necessarily a big deal, even if true. But it's relevant to some agent-foundation work I've been doing, and I haven't seen this angle discussed before.

The way it can matter: Should we expect to exist in universes that abstract well, by the exact same argument that we use to argue that we should expect to exist in "alt-simple" universes?

That is: suppose there's a class of universes in which the information from the "lower levels" of abstraction becomes increasingly less relevant to higher levels. It's still "present" on a moment-to-moment basis, such that an AIXI which retained the full memory of an embedded agent's sensory stream would be able to narrow things down to a universe specified up to low-level details.

But the actual agents embedded in such universes don't have such perfect memories. They constantly forget the low-level details, and presumably "identify with" only high-level features of their identity. For any such agent, is there then an "equivalence class" of agents that are different at the low level (details of memories/identity), but whose high-level features match enough that we should consider them "the same" agent for the purposes of the "anthropic lottery"?

For example, suppose there are two Everett branches that differ by whether you saw a dog run across your yard yesterday. The existence of an extra dog doubtless left countless "microscopic" traces in your total observations over your lifetime: AIXI would be able to tell the universes apart. But suppose our universe is well-abstracting, and this specific dog didn't set off any butterfly effects. The consequences of its existence were "smoothed out", such that its existence vs. non-existence never left any major differences in your perceptions — only various small-scale details that you've forgotten or that don't matter.

Does it then mean that both universes contain an agent that "counts as you" for the purposes of the "anthropic lottery", such that you should expect to be either of them at random?

If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).
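The counting argument above can be made concrete with a toy model (the 4-bit state space and both agents are invented for illustration, not from the post): an agent whose identity criterion is coarser is supported by a strictly larger equivalence class of low-level states.

```python
from itertools import product

# Low-level universe: 4 bits. Two agents apply different "identity"
# coarse-grainings over the same state space.

def fine_agent(state):
    # Identifies with three high-level features (retains more detail).
    l1, l2, l3, l4 = state
    return (l1, l2, l3)

def coarse_agent(state):
    # Identifies with a single high-level feature (forgets the rest).
    l1, l2, l3, l4 = state
    return (l1,)

def class_size(agent, observation):
    """Number of low-level states consistent with the agent's
    high-level observation -- the size of its 'support pool'."""
    return sum(1 for s in product([0, 1], repeat=4) if agent(s) == observation)

print(class_size(fine_agent, (0, 0, 0)))  # 2 states support the fine agent
print(class_size(coarse_agent, (0,)))     # 8 states support the coarse agent
```

Under a uniform measure over low-level states, the coarser agent here draws on four times the "pool"; the anthropic argument in the post is that, all else equal, this should shift our expectations toward being such coarser, well-abstracted agents.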


So: are there any fatal flaws in this chain of reasoning? Undesirable consequences to biting all of these bullets that I'm currently overlooking?

  1. ^

    Please don't actually do that.

  2. ^

    As an intuition-booster, imagine that we implemented some abstract system that got only very sparse information about the wider universe. For example, a chess engine. It can't look at its code, and the only inputs it gets are the moves the players make. If we imagine that there's a conscious agent "within" the chess engine, the only observations of which are the chess moves being made, what "reason" does it have to consider itself embedded in our universe specifically, as opposed to any other universe in which chess exists? Including universes with alien physics, et cetera.



