Published on August 2, 2025 10:37 PM GMT
First, I want to say that I’m actually pretty uncertain about how much of a threat different degrees of AI development pose. But I know lots of people think ‘rationalists’ should act as though we’re in a conflict: safetyists against developers. Assuming that’s the case, I want to see where their assumptions lead regarding the worthiness of working on different things.
In this world, safetyists need to mobilize against developers; the force that needs raising is people working directly on slowing down AI progress.
One might have many motivations for not joining that force. One might not want to incur the personal costs of working on something unpopular, poorly remunerated (or not remunerated at all), fringe, and/or ‘cringe’.
Conditioning on conflict being the right frame here, I see the main moral motivation for not joining that force as stemming from Astronomical Waste. God, technological progress is so precious and fragile. I’m terrified of doing something to disrupt it. People will suffer and die, or just continue living deeply uncomfortable lives, if we slow or shut down technological progress, the only means we have of changing that.
Back to the ‘mobilization’ metaphor. In historical cases of military mobilization, you have ‘draft-dodgers’, who supposedly just didn’t want to enlist for self-interested reasons. And then you have ‘conscientious objectors’, who claimed exemption on moral or religious grounds. It was fairly shameful to be a conscientious objector during WW1 and WW2: there was peer pressure, and there were accusations of free-riding, cowardice, and so on.
How have conscientious objectors saved face in the long view of history? By doing work at least as miserable and critical as that of soldiers on the front line. I knew of railway workers in Britain, but today I learned of the Minnesota Starvation Experiment, conducted at the University of Minnesota beginning in 1944, which used conscientious objectors as guinea pigs. That’s some pretty intense service!
Assuming safetyists are obligated to frustrate AI development, what kind of work can Astronomical-Waste-driven ‘conscientious objectors’ get up to? To me, the answer lies in differential technological development: accelerating technologies, especially in alignment, that are more ‘clearly very good’ than dangerous. This way, you’re not committing the victimful sin of quashing human innovation, but you are still participating in the war effort.
There’s some cold water here: Britain and America probably needed far more soldiers than conscientious objectors to win their respective conflicts. I recently watched a talk suggesting that “everyone wants to be the good cop—but a pressure movement needs a whole lot more bad cops than good.”[1]
Independent of whether we’re in an AI-conflict world (after all, we’re in triage every second of every day), what other lessons can we take from the legacy of conscientious objection? If I assume a more ‘comfortable’ role, I want to reflect on why, and to feel, and act on, my duty towards people who take less comfortable but equally critical roles. I’ve always felt uncomfortable about the sheen of ‘comparative advantage’. In the case of conscientious objectors, honoring this duty often meant working so hard that their jobs couldn’t be deemed more ‘comfortable’ by the end.[2]
There are caveats: we’re entering a world where one can work smart, not hard, and we’re moving away from a world of military and political numbers towards one of drones, cyber-warfare, and so on. But I think this still matters.
There is nuance here regarding burnout, etc. I take this as assumed, and I want to bite hard bullets today.