Relativity Theory for What the Future 'You' Is and Isn't

Published on July 29, 2024 2:01 AM GMT

"Me" encompasses three constituents: this mind here and now, its memory, and its cared-for future. From this, no ‘ought’ follows with regard to caring about future clones or uploads, and your lingering questions about them dissipate.

In When is a mind me?, Rob Bensinger suggests the answer is Yes to three questions:

    If I expect to be uploaded tomorrow, should I care about the upload in the same ways (and to the same degree) that I care about my future biological self?
    Should I anticipate experiencing what my upload experiences?
    If the scanning and uploading process requires destroying my biological brain, should I say yes to the procedure?

I say instead: Do whatever occurs to you, it’s not wrong! And if tomorrow you change your mind, that’s again not wrong.[1] So the answers here are:

    Care however it occurs to you!
    Well, what do you anticipate experiencing? Something or nothing? You anticipate whatever you do anticipate, and that’s all there is to know; there’s no “should” here.
    Say what you feel like saying. There’s nothing inherently right or wrong here, as long as it aligns with your actual, internally felt, forward-looking preference regarding the uploaded being and the physically to-be-eliminated future being.

Clarification: This does not imply you should never wonder about what you actually want. It is normal to feel confused at times about our own preferences. What we must not do is insist on reaching a universal, 'objective' truth about it.

So, I propose there’s nothing wrong with being hesitant about whether you really care about the guy walking out of the transporter. Whatever your intuition tells you is as good as it gets in terms of judgement. It’s neither right nor wrong. So I advocate a sort of relativity theory for your future, if you will: Care about whomever's fate you happen to care about, but don’t ask whom you should care about among your successors.

I arrive at this conclusion starting from a position rather similar to the one posited by Rob Bensinger. The take rests on only two simple core elements:

    The current "me" is precisely my current mind at this exact moment—nothing more, nothing less.
    This mind strongly cares about its 'natural' successor over the next milliseconds, seconds, and years, and it cherishes the memories from its predecessors. "Natural" feels vague? Exactly, by design!

Implication

In the absence of cloning and uploading, this is essentially the same as being a continuous "self." You care so deeply about your direct physical and mental successors that you might as well speak of a unified 'self'. Rob Bensinger provides a more detailed examination of this idea, which I find agreeable. With cloning, everything remains the same except for a minor detail: if we're open to it, it creates no complications in otherwise perplexing thought experiments. Here's how it works:

    Your current mind is cloned or transported. The successors simply inherit your memories, each in turn developing their own concern for their successors holding their memories, and so forth.
    How much you care for future successors, or for which successor, is left to your intuition. There's nothing more to say! There's no right or wrong here. We may sometimes be perplexed about how much we care for which successor in a particular thought experiment, but you may adopt a perspective as casually, quickly, and baselessly as you happen to; there's nothing wrong with any view you may hold. Nothing harms you (or at least not more than necessary), as long as your decisions are in line with the degree of regard you feel for the future successors in question.

Is it practicable?

Can we truly live with this understanding? Absolutely. I am myself right now, and I care about the next second's successor with about '100%' weight: just as much as for my actual current self, under normal circumstances. Colloquially, even in our own minds, we refer to this as being "our continuous self." But tell yourself that's rubbish. You are only the current moment's you, and the rest are successors you may deeply care about. This perspective simplifies many dilemmas: You fall asleep in your bed, someone clones you, places the original you on the sofa and the clone in your bed. Who is "you" now? Traditional views are often confounded; everyone has a different intuition. Maybe every day you have a different response, based on no particular reason. And it's not your fault; we're simply asking the wrong question.

By adopting the relativity viewpoint, it becomes straightforward. Maybe you want to ensure the right person receives the gold bar upon waking, so you place it where it feels most appropriate according to your feelings towards the two. Remember, you exist only now, and everything in the future comprises new selves, for some of which you simply have a particular forward-looking care. Which one do you care more about? That should guide where you place the gold bar.

Vagueness – as so often in altruism

You might say it’s not easy; you can’t just make up your mind so easily about whom to care for. That resonates with me. Have you ever looked into how humans show altruism towards others? It’s not exactly pretty. Not just because total altruism is disappointingly small, but simply because we don’t have good, quantitative answers as to whom we care about and how much. We’re extremely erratic here: one minute we might completely ignore lives far away, and the next, a small change in the story can make us care deeply. So it may also be with your feelings towards future beings inheriting your memories and starting off with your current brain state. You may have no very clear preferences. But here’s the thing: it’s all okay. There’s no “wrong” way to feel about which future mind to care about, so don’t sweat over figuring out which one is the real “you.” You are who you are right now, with all your memories, hopes, and desires related to one or several future minds, especially those who directly descend from you. It’s a bit like how we feel about our kids: there are no fixed rules on how much we should care.

Of course, we can ask from a utilitarian perspective how you should care about whom, but that’s a totally separate question, as it deals with aggregate welfare, and thus precisely not with subjective preference for any particular individuals.

More than a play on words?

You may call it a play on words, but I believe there's something 'resolving' in this view (or in this 'definition' of self, if you will). And personally, the thought that I am not in any absolute sense the person who will wake up in that bed I go to sleep in now is inspiring. It sometimes motivates me to care a bit more about others than just myself (well, well, vaguely). None of these final points justify the proposed view in any ultimate way, of course.

  1. ^

     This sounds like moral relativism but has nothing to do with it. We might be utilitarians and agree every being has a unitary welfare weight. But that’s exactly not what we discuss here. We discuss your subjective (‘egoistic’) preference for you and for the potential future of what we might or might not call ‘you’.


