Published on July 22, 2024 1:44 AM GMT
Note: An initial proposal and some good discussion already existed on LW here. I’m spinning this off as a post instead of a comment due to length, the need for a fresh look, and a specific call to action.
Summary
I think a petition-style boycott commitment could reach a critical mass large enough to significantly shift OpenAI corporate policy.
I specifically think a modular petition, one that lets each signer choose which goalposts the target must cross before their boycott ends, would be a good method of coalition building among those concerned about AI Safety from different angles.
Postulates
- OpenAI needs some reform to be a trustworthy leader in the age of AI
- Zvi’s Fallout and Exodus roundups are good summaries, but the main points are:
  1. The NDA Scandal: forcing employees to sign atypically aggressive non-disparagement and recursive non-disparagement agreements
  2. Firing Leopold Aschenbrenner for whistleblowing to the board
  3. Not keeping safety compute commitments
  4. Multiple safety leaders leaving amid suggestions that the culture no longer respects safety (eg Jan Leike)
- Point 4 is arguably a bridge too far and could be left out or weakened (or made optional with a modular petition)
- The majority of OpenAI’s revenue comes from individual $20/mo subscribers, according to FutureSearch (see the rough arithmetic sketch after this list)
- OpenAI is likely sensitive to revenue at the moment, given the higher interest rate environment and investors’ recent focus on the imbalance between AI companies’ CapEx and revenue (eg this Sequoia report)
- OpenAI has shown itself to be fairly reactive to recent PR debacles
- Modern boycotts have a significant success rate at changing corporate policy
  - Ethical Consumer details a few successful boycotts per year for the last few years; boycotts targeting large multinationals, especially publicly traded ones like Microsoft, have historically done particularly well
- Boycotting a paid subscription won't harm users much
  - OpenAI’s latest model is available for free: the paid perks are simply more usage and faster speeds
  - Switching to Claude is easy, and Sonnet 3.5 is better
- Polls show substantial bipartisan majorities of Americans would rather “take a careful controlled approach” than “move forward on AI as fast as possible”
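To make the revenue postulate concrete, here is a rough back-of-the-envelope sketch. The subscriber count and participation rate below are hypothetical placeholders chosen for illustration, not sourced figures; only the $20/mo price comes from the post itself.

```python
# Back-of-the-envelope: annualized revenue impact of a subscription boycott.
# The subscriber count and participation rate are hypothetical placeholders.

MONTHLY_FEE = 20  # USD: the standard ChatGPT Plus subscription price


def annual_revenue_lost(subscribers: int, participation_rate: float,
                        months_cancelled: int = 12) -> float:
    """Revenue foregone if a fraction of subscribers cancel for some months."""
    return subscribers * participation_rate * MONTHLY_FEE * months_cancelled


# Hypothetical: 5 million paying subscribers, 1% join the boycott for a year.
print(f"${annual_revenue_lost(5_000_000, 0.01):,.0f}")  # -> $12,000,000
```

Under these placeholder numbers, even a low single-digit participation rate translates into tens of millions of dollars per year, which is the scale at which the "dent in revenue" question below becomes live.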
Arguments against and some answers/options
- This unfairly singles out OpenAI
  - OpenAI is likely the worst offender and has the most recent negative PR to galvanize support
  - OpenAI is seen as the leader by the public; other labs will follow once one company commits, or be seen publicly as not caring about safety
- Why not rally around existing demands, such as Right to Warn?
  - This is a more concrete and agreeable set of demands, and it sets a precedent that the public is watching and willing to act
  - A modular petition with different opt-in commitments, Right to Warn demands among them, could create a powerful coalition among those concerned about different aspects of AI Safety
- This won’t end enough subscriptions to make a dent in revenue
  - Even moderate success, especially among those in tech, could persuade engineers not to pursue work at OpenAI
  - This may be why OpenAI has been so reactive to recent PR issues.
Conclusion
If, after feedback, I still think this is a good idea, I’d be interested in any advice or help in finding a place to host a commitment-petition, especially one with modular features to allow for commitments of different lengths and with different goalposts centered around the same theme.
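To illustrate what such modular features might involve, here is a minimal sketch of a per-signer commitment record. All goalpost labels, field names, and the example signer are hypothetical placeholders of my own, not a proposed final schema.

```python
# Minimal sketch of a modular commitment-petition record.
# Goalpost labels, field names, and the example signer are hypothetical.
from dataclasses import dataclass

# Goalposts a signer can opt into; their boycott ends once their chosen set is met.
GOALPOSTS = {
    "release_from_ndas": "Release former employees from non-disparagement terms",
    "right_to_warn": "Adopt the Right to Warn demands",
    "safety_compute": "Keep the promised safety compute commitments",
}


@dataclass
class Commitment:
    signer: str
    goalposts: list[str]  # which GOALPOSTS keys the target must cross
    months_pledged: int   # length of the subscription boycott

    def is_satisfied(self, goalposts_met: set[str]) -> bool:
        """A signer's boycott ends when every goalpost they chose is met."""
        return all(g in goalposts_met for g in self.goalposts)


# Example: a signer who requires only the NDA and safety-compute goalposts.
c = Commitment("signer@example.com", ["release_from_ndas", "safety_compute"], 12)
print(c.is_satisfied({"release_from_ndas"}))                    # False
print(c.is_satisfied({"release_from_ndas", "safety_compute"}))  # True
```

The design point is that satisfaction is evaluated per signer rather than petition-wide, which lets people with different thresholds sign the same document.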