Astronomical Waste & Conscientious Objection

 

The article explores the potential conflict between safetyists and developers in the course of AI development. The author is uncertain about the precise degree of threat that AI development poses, but acknowledges that many people treat it as a 'war'. Under that framing, safetyists are expected to mobilize, with some working directly on slowing AI progress. The article analyzes possible motivations for not joining the safetyist ranks, including avoiding the costs of unpopular, poorly paid, or 'cringe' work, as well as the worry that technological progress, the only means of relieving human suffering, might be interrupted. The author draws an analogy to historical 'draft-dodgers' and 'conscientious objectors', and proposes that Astronomical-Waste-driven conscientious objectors can take part in the 'war' through 'differential technological development', i.e. by accelerating 'clearly beneficial' technologies such as alignment. Finally, the article reflects on how, even outside a state of conflict, one should consider and honor one's duty toward those who take on harder roles, and notes the trend toward 'working smart' rather than merely 'working hard'.

🛡️ A potential conflict between AI safety and development: the article entertains the view that safetyists and developers in the AI field are opposing camps, with safetyists needing to act to slow AI development while developers worry about technological progress being obstructed.

⚖️ Conscientious objectors as an analogy for AI safety work: the author compares people unwilling to work directly on slowing AI development to historical 'conscientious objectors', and explores how they might contribute to the 'safety camp' through 'differential technological development', for example by accelerating AI alignment, without quashing innovation.

🧠 Astronomical Waste and the value of technological progress: the article stresses how precious and fragile technological progress is, and the worry that any interruption would leave people to go on suffering and living uncomfortable lives, which constitutes a major moral consideration against joining safetyist action.

🤝 Responsibility and the division of labor: even outside a direct AI conflict, the author calls for reflection on one's moral duties under different divisions of roles, especially toward individuals who take on harder but equally critical ones, and notes the backdrop of an era of 'working smart'.

Published on August 2, 2025 10:37 PM GMT

[crosspost]

First, I want to say that I’m actually pretty uncertain about how much of a threat different degrees of AI development pose. But I know lots of people think ‘rationalists’ should act as though we’re in a conflict—safetyists against developers. Assuming that’s the case, I want to see where their assumptions lead regarding the worthiness of working on different things.

In this world, safetyists need to mobilize against developers, with many working directly on slowing down AI progress. That’s the force that needs to be mobilized.

One might have many motivations for not wanting to join that force. One might not want to incur the personal costs of working on something that’s unpopular, poorly paid or unpaid, fringe, and/or ‘cringe’.

Conditioning on conflict being the right frame here, I see the main moral motivation for not joining that force as stemming from Astronomical Waste (Bostrom’s argument about the opportunity cost of delayed technological development). God, technological progress is so precious and fragile. I’m terrified of doing something to disrupt it. People will suffer and die—or just continue living deeply uncomfortable lives—if we slow or shut down technological progress, the only means we have of changing that.

Back to the ‘mobilization’ metaphor. In historical cases of military mobilization, you have ‘draft-dodgers’, who supposedly just don’t want to enlist for self-interested reasons. And then you have ‘conscientious objectors’, who claim exemption on moral or religious grounds. It was fairly shameful to be a conscientious objector during WW1 and WW2—there was peer pressure, plus accusations of free-riding, cowardice, and so on.

How have conscientious objectors saved face in the long view of history? By doing work at least as miserable and critical as that of soldiers on the front line. I knew of railway workers in Britain, but today I learned of the starvation study conducted at the University of Minnesota in 1944, which used conscientious objectors as guinea pigs. That’s some pretty intense service!

Assuming safetyists are obligated to frustrate AI development—what kind of work can Astronomical-Waste-driven ‘Conscientious Objectors’ get up to? To me, the answer lies in differential technological development: accelerating technologies, especially in alignment, that are more ‘clearly very good’ than dangerous. This way, you’re not committing the victimful sin of quashing human innovation, but you are still participating in the war effort.

There’s some cold water here, which is that Britain and America probably needed far more soldiers than conscientious objectors to win their respective conflicts. I recently watched a talk suggesting that “everyone wants to be the good cop—but a pressure movement needs a whole lot more bad cops than good.”[1]

Independent of whether we’re in an AI-conflict-world—after all, we’re in triage every second of every day—what other lessons can we take from the legacy of conscientious objection? If I assume a more ‘comfortable’ role, I want to reflect on why, and to feel, and act on, my duty towards people who take less comfortable, equally critical roles. I’ve always felt uncomfortable about the sheen of ‘comparative advantage’. In the case of conscientious objectors, honoring this duty often meant working so hard that their jobs couldn’t be deemed more ‘comfortable’ by the end.[2]

 

[1] There are caveats—we’re entering a world where one can work smart, not hard. We’re moving away from a world of military/political numbers towards one of drones, cyber-warfare, and so on. But I think this still matters.

[2] There is nuance here regarding burnout, etc. I take this as assumed and want to bite hard bullets today.


