Moral Obligation and Moral Opportunity

Published on May 14, 2025 4:42 PM GMT

This concept and terminology were spawned out of a conversation a few years ago with my friend Skyler. I finally decided to write it up. Any mistakes here are my own.


Every once in a while, I find myself at yet another rationalist/EA/whatever-adjacent social event. Invariably, someone walks up to me:

Hi, I'm Bob. I'm pretty new here. What do you work on?

Hi Bob, I'm Alice. I work on preventing human extinction from advanced AI. If you'd like, I'm happy to talk a bit more about why I think this is important.

Bob is visibly nervous.

Yeah, I've already gotten the basic pitch on AI safety, it seems like it makes sense. People here all seem to work on such important things compared to me. I feel a sort of moral obligation to help out, but I feel stressed out about it and don't know where to start.

Alice is visibly puzzled, then Bob is puzzled that Alice is puzzled.

I'm not sure I understand this "moral obligation" thing? Nobody's forcing me to work on AI safety, it's just what I decided to do. I could've chosen to work on music or programming or a million other things instead. Can you explain what you mean without using the word "obligation"?

Well, things like "I'm going to save the world from extinction from AI" or "I'm going to solve the suffering of billions of farmed animals" are really big and seem pretty clearly morally right to do. I'm not doing any of those things, but I feel an oblig- hmm... I feel like something's wrong with me if I don't work on these issues.

I do not think anything is necessarily wrong with you if you don't work on AI safety or any other of these EA causes. Don't get me wrong, I think they're really important and I love when there are more people helping out. I just use a very different frame to look at this whole thing.

Before running into any of these ideas, I was going about my life, picking up all sorts of opportunities as I went: there's $5 on the ground? I pick it up. There's a cool conference next month? I apply. So when I heard that I plausibly lived in a world where humans go extinct from AI, I figured that owning up to it doesn't make it worse, and I looked at my opportunities. I get the chance to learn from a bunch of smart people to try and save the world? Of course I take the chance, that sounds so cool.

My point here is that you're socially and emotionally allowed to not take that opportunity, just like you're allowed to not pick up $5 or not apply to conferences. I think it's probably good for people to pick up $5 when they see it or help out with AI safety if they can, but it's their opportunity to accept or decline.

This feels like approximately the same thing as before? Under the moral obligation frame, people look at me negatively if I don't do the Super Highly Moral thing, and under the moral opportunity frame you tell me I have a choice but only look at me positively if I do the Super Highly Moral thing? Isn't this just the same sort of social pressure, but you say something about respecting personal agency?

Well, I'm not actually that judgmental, I'll look at you pretty positively unless you do something Definitely Wrong. But that's not the point. The point is that these two framings make a huge emotional difference when used as norms for a personal or group culture. Positive reinforcement is more effective than positive punishment because it tells someone exactly what to do instead of just what not to do. Reinforcement is also just a more emotionally pleasant stimulus, which goes a long way.

Let's look at this a different way: say that my friend Carol likes to watch TV and play video games and not much else. The moral obligation frame looks at Carol and finds her clearly in the moral wrong, lounging around while there are important things to be doing. The moral opportunity frame looks at her and sees a person doing her own things in a way that doesn't hurt other people, and that's morally okay.

These two frames still seem weirdly similar, like in the "moral opportunity" frame you just shifted all options to be a bit more morally good so that everything becomes okay. But ultimately both frames still think working on saving the world is better than watching TV. I see what you're saying about emotions, but this still feels like some trick is being played on my sense of morality.

That's a reasonable suspicion. I think the math of this sort of shifting works out, I really don't think there's any trick here. Ultimately it's your choice how you want to interface with your emotions. I find that people are much more likely to throw their mind away when faced with something big and scary that feels like an obligation, compared with when they feel like an explorer with so many awesome opportunities around.

It's sad to live in a world that could use so much saving, and dealing with that is hard. There's no getting around that emotional difficulty except by ignoring the world you live in. Conditional on the world we live in, though, I'd much rather live in a culture that frames things as moral opportunities than moral obligations.


I frame this as a conversation with a newcomer, but I also see the moral obligation frame implicit in a lot of experienced EAs, especially those who are going through some EA burnout. The cultural pieces making up this post mostly already existed across the EA-sphere (and I've tried to link to them where possible), but I haven't seen them collected in this way before, nor have I seen this particular concept boundary drawn.


