少点错误 (LessWrong), March 23
Dusty Hands and Geo-arbitrage
This article examines the dilemma effective altruism (EA) faces with interventions that touch on "sacred" domains, and how potentially effective but controversial projects might be funded. The author notes that EA funders tend to avoid projects involving digital consciousness, animal welfare, genetic engineering, and right-wing politics, areas that collide with deeply rooted social beliefs. The article suggests that these constraints can be circumvented through geographic reputational arbitrage, by seeking out regions less sensitive to the relevant issues, and by building organizations distinct from traditional EA, thereby finding funding for valuable but controversial interventions.


Published on March 22, 2025 4:05 PM GMT

There is the sacred, there is the mundane, and there is rent. In a civilization with decent epistemology, mundane problems with obvious solutions will likely already be addressed, save for those a rent-extractor has some moat around. And even those can fall when things get shaken up. But what is sacred is not so easily thought about. So a good source of underfunded interventions will likely be those that impinge on the sacred.

Consider a civilization where hands are sacred and washing them is considered a sin. Wiping hands on a dry towel, though shameful, is allowed in private. But anything more is an insult to God, and gloves are considered a barrier between man and the world God created for him. Standard sanitation becomes difficult, and surgery an invitation to sepsis.

Here we have a world with a lot of cheap utils up for grabs. And Earth's effective altruists would have obvious interventions to fund. But let's imagine EA culture (at least what I see in the modern EA Forum) is a child of this world and of this dry-handed culture. This is approximately how I would expect them to react to an intervention that impinges on this sacred topic.

 

There was a satirical post I wrote for the EA Forum when it first started - which I never bothered publishing as it was slightly mean-spirited. I had been reading Mormon history at the time and I was impressed with the power of starting a cult. And it struck me that if EAs continued tithing, ritualized somewhat, and enforced fecundity norms, the expected impact was likely enormous. The fact that this idea was actually surprisingly strong and seemed maximally disgusting to the type of person interested in EA was amusing to me. 

Despite not posting, I never doubted that if I did a good job and wrote it well, I would not be massively downvoted for such a disgusting idea. To use a cringe term, there was a strong "high decoupler" streak in the EA Forum back then, and I would have expected counterarguments, not downvotes, provided my post was intelligent. That culture is now mostly dead.

habryka summarizes the areas Open Phil will blacklist an organization for funding:

You can see some of the EA Forum discussion here: https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures?commentId=RQX56MAk6RmvRqGQt 

The current list of areas that I know about is:

    - Anything to do with the rationality community ("Rationality community building")
    - Anything to do with the moral relevance of digital minds
    - Anything to do with wild animal welfare and invertebrate welfare
    - Anything to do with human genetic engineering and reproductive technology
    - Anything that is politically right-leaning

There are a bunch of other domains where OP hasn't had an active grantmaking program but where my guess is most grants aren't possible: 

    - Most forms of broad public communication about AI (where you would need to align very closely with OP goals to get any funding)
    - Almost any form of macrostrategy work of the kind that FHI used to work on (i.e. Eternity in Six Hours and stuff like that)
    - Anything about acausal trade or cooperation in large worlds (and more broadly anything that is kind of weird game theory)

And again, this is a blacklist, not just a funding preference. It casts a pall on any organization that funds multiple projects and wants Open Phil funding for at least one of them.

If you "withdraw from a cause area," you would expect that an organization doing good work in multiple cause areas would still be funded for its work in the areas funding wasn't withdrawn from. However, what actually happened is that Open Phil blacklisted a number of ill-defined broad associations and affiliations: if you are associated with a certain set of ideas, identities, or causes, then no matter how cost-effective your other work is, you cannot get funding from OP.

With the exception of avoiding rationalists (and can we really blame Moskovitz for that?), the Open Phil blacklist is a list of things that impinge on the sacred:

Digital minds impinge on our intuition of an immaterial soul, which remains powerful even in secular Western culture.

Wild animal welfare impinges on the sacred notion of a benevolent mother nature.

Human genetic engineering impinges on the sacred notion of human equality.

And "anything that is politically right-leaning" impinges on the sacred notion that Ezra Klein is correct about everything.

Disincentivizing research into the welfare of digital minds alone can undo any good many times over. There are consequences to lobotomizing one of the few cultures with good enough epistemology to think critically about sacred issues, and much good can be undone. It's plausible to me that enough good can be undone that one would have been better off buying yachts.

But regardless of Moskovitz's desire to keep his hands dusty, we are still left with the question of how one funds taboo-but-effective interventions given the obvious reputational risks.

I think there may be a sort of geographic reputational arbitrage that is under-explored. Starting with a less controversial example, East Asian countries seem to have less parochial notions of human and machine consciousness, and the topic plausibly has less political valence there. Raising or deploying funds in Japan and Korea, and perhaps even China if possible, might be worth investigating.

In the case of engineering humans for increased IQ, Indians show broad support for such technology in surveys (even in the form of rather extreme intelligence enhancement), so one might focus on doing research there and/or lobbying the Indian public and government to fund such research. High-impact Indian citizens interested in this topic seem like very good candidates for funding, especially those with the potential of snowballing internal funding sources that would be insulated from Western media bullying.

As for wild-animal welfare, I don't have any ideas about similar arbitrages, but it may be worth some smarter minds' time to think the question over for five minutes.

And in terms of "anything right-leaning," a parallel EA culture, preferably under a different name, able to cultivate right-wing funding sources might be effective. One might also mount campaigns to reframe right-coded-but-good ideas as left-coded ideas instead. There is an obvious redistributional case for genetic engineering (what is a high IQ if not unearned genetic privilege?), for example, that can perhaps be framed in a left-wing manner.

