Wagering on Will And Worth (Pascal's Wager for Free Will and Value)

Published on November 27, 2024 12:43 AM GMT

Epistemic Status: Quite confident (80%?) that the framework is very useful for the subject of free will. Pretty confident (66%?) that the framework is useful for meta-ethics. Hopeful (33%) that I am using it to bring out directionally true statements about what my CEV would be in worlds where we have yet to find objective value.

Most discussions about free will and meaning seem to miss what I understand to be the point. Rather than endlessly debating the metaphysics, we should focus on the decision-theoretic implications of our uncertainty. Here's how I[1] think we can do that, using an abstracted Pascal's Wager.

Free Will: A Pointless Debate

People argue endlessly about whether we have Free Will, bringing up quantum mechanics, determinism, compatibilism, blah, blah (blah). But regardless of whether we have it or not:

In worlds where we have no free will: nothing we (apparently) choose to believe or do matters - our beliefs and actions were determined either way.

In worlds where we have free will: our beliefs and choices genuinely shape what happens, so believing we have agency, and acting on it, is what lets us actually exercise it.

Therefore, if we have free will, believing in it and acting accordingly is incredibly valuable. If we don't have free will, nothing we (choose to) believe matters anyway. The expected value clearly points towards acting as if we have free will, even if we assign it a very low probability (I don't think too much about what numbers should be here[2] but estimate it at 5-20%).
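
To make the asymmetry concrete, here is a minimal sketch of that expected-value comparison in Python. The credence and payoff numbers are placeholders I've picked purely for illustration; the only figure taken from above is the rough 5-20% range.

```python
# A minimal sketch of the wager's expected-value logic, with made-up numbers.
# p, value_if_free, and value_if_not are illustrative placeholders, not real estimates.

def expected_value(p_free_will: float, value_if_free: float, value_if_not: float) -> float:
    """Expected value of a policy, given the probability that free will exists."""
    return p_free_will * value_if_free + (1 - p_free_will) * value_if_not

p = 0.10  # even a low credence (something in the 5-20% range mentioned above)

# Acting as if we have free will: large payoff in the worlds where we do,
# and no loss in the worlds where we don't (nothing we "choose" matters there).
ev_act_as_if_free = expected_value(p, value_if_free=100.0, value_if_not=0.0)

# Acting as if we don't: we forfeit the payoff in the worlds where we do have it.
ev_act_as_if_not = expected_value(p, value_if_free=0.0, value_if_not=0.0)

print(ev_act_as_if_free, ev_act_as_if_not)  # 10.0 vs 0.0 -> acting as if we're free dominates
```

Because the no-free-will worlds contribute zero to either policy, acting as if we have free will weakly dominates for any nonzero credence - which is all the wager needs.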

Meaning and All That Is Valuable

I've found I am unfortunately sympathetic to some nihilistic arguments:

Whether through personal passing, civilizational collapse, or the heat death of the universe, all information about our subjective experiences, be they bliss, bitterness, fulfillment, or failure, will eventually be lost. 

Further, even if it truly is just about living in the moment, the things we as humans value - our goals, our emotions, our moral intuitions - are merely mesa-optimizations that contributed to reproductive fitness in our ancestral environment. It would be remarkably convenient if these specific things happened to be what is actually good in any grand, universal sense.

Nevertheless, by applying similar reasoning (Pascal's Wager) to the question of meaning/value, I can both integrate what seem to me to be very strong arguments in favor of nihilism and remain within a moral framework that is clearly better (it still prescribes action/actually says something).[3]

Consider two possibilities:

    1. There exists something (objective value) universally compelling that any sufficiently advanced mind (/Bayesian agent) would recognize as valuable - something possibly beyond our evolutionary happenstance and/or something timeless[4]
    2. There is nothing like objective value

If nothing is objectively valuable: then nothing we do or believe ultimately matters, and there is no real cost to having bet wrongly on this question.[5]

If objective value exists: then searching for it, finding it, and acting on it is the most important thing we could possibly do.

By objective value, I mean something that Bayesian agents would inevitably converge on valuing through Aumann's Agreement Theorem - regardless of their starting points. While we don't know what this is yet (or if it exists), the convergence property is what makes it "objective" rather than just subjective preference. I can imagine this might include conscious experience as a component, but I remain quite uncertain. 

The expected value calculation here is clear: we should act as if objective value exists and try to find it. The downside of being wrong is nothing (literally), while the upside of being right is vast!
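
The same dominance structure can be laid out as a small payoff table; the entries below are my own qualitative placeholders, meant only to show why being wrong about this costs nothing.

```python
# The same wager, written as a tiny payoff table for the objective-value question.
# The entries are qualitative placeholders meant only to show the structure.

payoffs = {
    # (our policy, how the world turns out): outcome
    ("search for objective value", "value exists"): "vast - we might actually find it",
    ("search for objective value", "no value exists"): "nothing lost (nothing mattered anyway)",
    ("ignore the question", "value exists"): "we miss the only thing that mattered",
    ("ignore the question", "no value exists"): "nothing lost",
}

for (policy, world), outcome in payoffs.items():
    print(f"{policy:26} | {world:15} -> {outcome}")
```

Reading down the "no value exists" column, the two policies tie; reading down the "value exists" column, searching strictly wins - so searching weakly dominates for any nonzero credence that objective value exists.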

What Does This Mean in Practice?

Given this framework, what should we actually do?

Instead of getting lost in meandering metaphysical debates about free will and value, we should act as if we have agency - make plans, take responsibility, and believe our choices matter. The alternative is strictly worse on decision-theoretic terms.

Further, we should try to maximize our odds of finding what is objectively valuable. Currently, I think this is best achieved by:

    1. Acting to ensure our survival[6] (hence a high priority on x-risk reduction)[7]
    2. Getting rationalists to dress/present better[8]
    3. Creating more intelligent, creative, and philosophically yearning beings with lots of technology
    4. Turning ourselves (and those we interact with) into a species(es) of techno-philosophers, pondering what is valuable, until we are certain we have either found objective value or that we will never find it.

On a less grand note, I expect we should maintain ethical guardrails aimed at minimizing suffering and maximizing happiness, as suffering matters in many plausible theories of value (wow, that is quite convenient for me as a human). Additionally, humans do knowledge work better when they're both happy and not getting tortured.

The beauty of this approach is that it works regardless of whether we're right about the underlying metaphysics. If we're wrong, we lose nothing. If we're right, we gain everything.

  1. ^

    Meta Note: I've been wanting to write a post about this for a while, but never got around to writing it by myself. What I did here was have Claude interrogate me for a while about "What is your [my] world model", then had it propose things it thought I would enjoy writing a blog post about, then write a rough draft of this. I've since edited it a decent bit and gotten feedback from real live humans. I'd love meta-level commentary on the output of this process / on Claude's and my writing.

  2. ^

    See section "Free Will: A Pointless Debate"

  3. ^

    I know, I know, KO'ing such a weak opponent is like bragging about how you're stronger than your grandparent.  

  4. ^

    I'd say defining objective value is my best guess for the weak point of this section.

  5. ^

    Conveniently, in this world my attempts at humor won't matter either!

  6. ^

    Gosh darned corrigibility worming its way into everything!

  7. ^

    While finding objective value is our most important task, we're forced to prioritize technical problems (like AI alignment and other, less pressing x-risk prevention) because we're too close to killing all the smart, creative, philosophically yearning beings we know of. We must ensure survival before we can properly pursue the ultimate goal.

    This creates a kind of nested Pascal's Wager:

      1. We must bet on the existence of objective value
      2. To have any chance of finding it, we must bet on our survival
      3. To survive, we must bet on solving certain technical problems first
  8. ^

    Silly Claude, why did this end up here? Oh well, I guess I'd better justify it: We (aspiring rationalists) have a lot of thoughts that we have good reason to believe would lead to a better world if more people internalized them. Most of the world, including most of the important people in the world, care about appearances (silly mesa-optimizers)! Putting a small amount of effort into how you look (possibly: getting a haircut, wearing clothes that fit ±10%, trying to avoid sweatpants and printed t-shirts, other stuff you might know to be applicable) helps get people to take you more seriously.



