Dusty Hands and Geoarbitrage

 

This article examines how open effective altruist (EA) culture is to interventions that touch the "sacred". It argues that the EA community was once able to rationally discuss interventions of enormous potential value that touched taboo topics, but that this capacity appears to have declined. Analyzing Open Philanthropy's (OP) funding blacklist, the article finds that it mainly excludes areas that cut against people's deeply held values, such as digital minds, wild animal welfare, and human genetic engineering. The author argues this trend constrains the EA community's critical thinking and may reduce its effectiveness. The article closes by proposing geographic arbitrage, seeking support in different cultural contexts, and building a parallel culture as ways around these problems.

👐 The "sacred" shapes which effective altruist interventions are acceptable: many high-impact interventions, such as digital-minds research, wild animal welfare, and human genetic engineering, are shunned within the EA community because they touch people's deeply rooted "sacred" notions, which holds back research and progress in these areas.

📉 The EA community's capacity for critical discussion has declined: contrasting the EA Forum's reactions to similar posts in 2019 and 2022, the article argues that the community's ability to discuss taboo topics rationally has deteriorated, leaving it unable to properly evaluate some potentially high-impact interventions.

🚫 Open Philanthropy's funding blacklist: the article analyzes OP's blacklist and shows that it excludes areas that touch the "sacred", including the moral relevance of digital minds, wild animal and invertebrate welfare, human genetic engineering, and politically right-leaning views. The blacklist not only narrows the scope of funding but also has a chilling effect on affected organizations.

🌍 Geographic arbitrage and cultural differences: the article proposes working around taboo constraints through geographic arbitrage, for example seeking research and funding support in East Asian countries that view digital minds differently, or in India, where acceptance of human genetic engineering is higher. This offers a new way out of the current impasse.

👯‍♀️ Building a parallel culture: the article suggests building a culture parallel to existing EA culture, operating under a different name and able to attract different funding sources, such as right-wing money. This opens up the possibility of more diverse approaches to funding.

Published on December 30, 2024 7:52 PM GMT

There is the sacred, there is the mundane, and there is rent. In a civilization with decent epistemology, you will find that mundane problems with obvious solutions have likely already been addressed, save for those around which a rent-extractor has built some moat. And even those can fall when things get shaken up. But what is sacred is not so easily thought about. So a good source of underfunded interventions will likely be those that impinge on the sacred.

Consider a civilization where hands are sacred and washing them is considered a sin. Wiping hands on a dry towel, though shameful, is allowed in private. But anything more is an insult to God, and gloves are considered a barrier between man and the world God created for him. Standard sanitation becomes difficult - surgery an invitation to sepsis.

Here we have a world with a lot of cheap utils up for grabs. And Earth's effective altruists would have obvious interventions to fund. But let's imagine EA culture (at least what I see in the modern EA Forum) is a child of this world and of this dry-handed culture. This is approximately how I would expect them to react to an intervention that impinges on this sacred topic.

 

There was a satirical post I wrote for the EA Forum when it first started - which I never bothered publishing as it was slightly mean-spirited. I had been reading Mormon history at the time and I was impressed with the power of starting a cult. And it struck me that if EAs continued tithing, ritualized somewhat, and enforced fecundity norms, the expected impact was likely enormous. The fact that this idea was actually surprisingly strong and seemed maximally disgusting to the type of person interested in EA was amusing to me. 

Despite not posting it, I never doubted that, had I done a good job and written it well, I would not have been massively downvoted for such a disgusting idea. To use a cringe term, there was much "high decoupler" nature in the EA Forum back then, and I would have expected counterarguments, not downvotes, provided my post was intelligent. This is now mostly dead.

To illustrate what we have lost, it's instructive to look at the response to two posts of similar quality: one from 2019 and one from 2022.

Link

Here we see a community capable of considering an intervention with enormous potential upside that impinges on the sacred. When I read that at the time, I remember thinking, wow, these people actually can have a conversation without going mad. Though the author was a complete fool for using that pseudonym, it is notable that the commentariat was capable of addressing the actual arguments. The fact that they could do so even with his inflammatory name is a credit to them.

Now let's compare EA's reaction to a similarly sacred-impinging post in 2022:

Link

 

In a post filled with interesting arguments and data, the top comment is an extremely bad-faith reading that seems to assign zero probability to the post's premises without argument. Reading this thread is what black-pilled me on EA, and I predicted then that it would become indistinguishable from the NGO blob relatively quickly.

Let's see how that prediction is faring.

habryka summarizes the areas that will get an organization blacklisted from Open Phil funding:

You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt 

The current list of areas that I know about are: 

    - Anything to do with the rationality community ("Rationality community building")
    - Anything to do with moral relevance of digital minds
    - Anything to do with wild animal welfare and invertebrate welfare
    - Anything to do with human genetic engineering and reproductive technology
    - Anything that is politically right-leaning

There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible: 

    - Most forms of broad public communication about AI (where you would need to align very closely with OP goals to get any funding)
    - Almost any form of macrostrategy work of the kind that FHI used to work on (i.e. Eternity in Six Hours and stuff like that)
    - Anything about acausal trade or cooperation in large worlds (and more broadly anything that is kind of weird game theory)

And again, this is a blacklist, not just a funding preference. It casts a pall on any organization that funds multiple projects and wants Open Phil funding for at least one of them.

If you "withdraw from a cause area" you would expect that if you have an organization that does good work in multiple cause areas, then you would expect you would still fund the organization for work in cause areas that funding wasn't withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations, where if you are associated with a certain set of ideas, or identities or causes, then no matter how cost-effective your other work is, you cannot get funding from OP. 

With the exception of avoiding rationalists (and can I really blame Moskovitz for that?), the Open Phil blacklist is a list of things that impinge on the sacred:

Digital minds impinge on the intuition that we have an immaterial soul, which remains powerful even in secular Western culture.

Wild animal welfare impinges on the sacred notion of a benevolent mother nature.

Human genetic engineering impinges on the sacred notion of human equality.

And "anything that is politically right-leaning" impinges on the sacred notion that Ezra Klien is correct about everything.

Disincentivizing research into the welfare of digital minds alone could undo any good done many times over. There are consequences to lobotomizing one of the few cultures with good enough epistemology to think critically about sacred issues, and much good can be undone. It's plausible to me that enough good can be undone that one would have been better off buying yachts.
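To make that expected-value intuition concrete, here is a minimal back-of-envelope sketch in Python; every quantity in it is an assumption invented purely for illustration, not a figure from this post or from Open Phil.

    # A toy expected-value comparison (all numbers are illustrative assumptions).
    p_taboo_area_matters = 0.05   # assumed chance the "sacred" concern is morally real
    value_if_it_matters  = 1e9    # assumed utils at stake if it is
    research_cost        = 1e6    # assumed cost of funding the taboo research anyway
    safe_portfolio_value = 1e7    # assumed utils from the uncontroversial grants

    ev_with_taboo_research = safe_portfolio_value - research_cost + p_taboo_area_matters * value_if_it_matters
    ev_with_blacklist      = safe_portfolio_value

    print(ev_with_taboo_research)  # -> 59000000.0 under these made-up numbers
    print(ev_with_blacklist)       # -> 10000000.0

Under these made-up numbers the blacklist forgoes several times more expected value than the entire "safe" portfolio produces, which is the sense in which a taboo can undo the good done elsewhere.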

But regardless of Moskovitz's desire to keep his hands dirty, we are still left with the question of how one funds taboo-but-effective interventions given the obvious reputational risks. 

I think there may be a sort of geographical reputational arbitrage that is under-explored. Starting with a less controversial example, East Asian countries seem to have less parochial notions of human and machine consciousness, and the topic plausibly has less political valence there. Raising or deploying funds in Japan and Korea, and perhaps even China if possible, might be worth investigating.

In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or on lobbying the Indian public and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential to snowball internal funding sources insulated from Western media bullying.

As for wild-animal welfare, I don't have any ideas about similar arbitrages, but I think it may be worth some smarter minds' time to think over this question for five minutes.

And in terms of "anything right-leaning", a parallel EA culture, preferably with a different name, able to cultivate right-wing funding sources might be effective. There is a relative paucity of Peter Thiels in the world, though, so one might instead focus on propaganda campaigns to reframe the right-leaning as left-leaning. There is an obvious redistributional case for genetic engineering, for example (what is a high IQ if not unearned genetic privilege?), that can perhaps be framed in a left-wing manner.



Discuss
