How Can Average People Contribute to AI Safety?

This article explores how ordinary people can play an active role in the face of the potential risks posed by artificial intelligence (AI), rather than remaining mere spectators. It notes that although decision-making power over AI is concentrated in the hands of a few, ordinary people can still contribute to AI safety by becoming informed, spreading the message, joining community discussions, contributing to technical safety research, and donating money. The article encourages readers to follow AI developments, engage actively in discussion, and support related research, so that everyone can help shape the future of AI.

📚 Become informed: Read material from the AI safety field, such as the AI Safety Fundamentals course, the aisafety.info website, and related books, to understand the basic concepts and arguments around AI risk and avoid spreading misunderstandings or inaccurate information in discussions.

📢 Spread the message: Discuss the future of AI with friends and family, or share your views on social media. Even if AI risks only materialize years from now, starting the conversation early helps develop more mature thinking and encourages the exchange of different perspectives.

💬 Join the community: Participate in communities such as LessWrong or the AI Alignment Forum by reading posts, leaving comments, or writing articles, engaging with researchers and practitioners who care about AI safety and offering feedback on their work.

🔬 Contribute to technical safety research: Contribute to technical AI safety research through AI evaluations (evals) and literature reviews. Evals help us better understand the capabilities and risks of current AI models, while literature reviews organize and synthesize existing research.

💰 Donate: Given that global investment in AI safety is still relatively small, donating to relevant research organizations and projects provides important financial support for the field.

Published on March 6, 2025 10:50 PM GMT

Introduction

By now you've probably read about how AI and AGI could have a transformative effect on the future and how AGI could even be an existential risk. But if you're worried about AI risk and are not an AI researcher or policymaker, can you really do anything about it, or are most of us just spectators, watching as a handful of people shape the future for everyone?

I recently came across a paragraph in one of Marius Hobbhahn's recent blog posts that partly inspired me to write this:

Most people are acutely aware of AI but powerless. Only a tiny fraction of the population has influence over the AI trajectory. These are the employees at AI companies and some people in the government and military. Almost everyone else does not have the chance to meaningfully do something about it. The general sentiment in the population is that AI is going to alter the world a lot (similar to the societal awareness during the peak of the Cold War). Most people are aware of their powerlessness and express frustration about being bystanders to their own fate. AI is altering the world in so many ways at the same time that there is not a lot of targeted political action despite large shifts in popular sentiment.

Hobbhahn's post is set in the late 2020s. But it seems worthwhile to have this post ready now, both for people who are already concerned and in case there is a flood of new users to LessWrong over the next several years trying to make sense of AI's rapid progress.

It might simply be the case that AGI, like many powerful technologies, is shaped by a small group of influential people. In the worst case, as the quote describes, the rest of us would be mere bystanders watching passively as AI transforms the world around us. But it seems only fair that if a technology is about to affect everyone's lives in significant ways, then ideally everyone should have some ability to positively influence its future. As time goes on, there is increasing awareness of AGI and AGI risks, but what's the point of that awareness if it isn't actionable?

Of all the ways AI could shape the future, I'll focus here on what I believe is the most important: the existential risk from AI. If AI poses a threat to the existence of humanity, then it's everyone's problem whether they know it or not, and it's in everyone's interest to take action to reduce the risk. So what can regular people do?

Target audience

I'll have to make a few more assumptions. First, by normal people I mean someone like the median LessWrong reader, since that's probably who will read this, and for now I think the problem addressed by this post mainly applies to that kind of person.

I think there are three kinds of people in the world in the context of AGI: people like Dario Amodei, who are highly aware of AGI and its risks and also have an enormous amount of leverage in shaping its future; people like you and me, who may be worried about AGI but are otherwise relatively average and uninfluential; and the actual median person in the population, who has both limited awareness of AGI and limited ability to affect its development. This post is aimed at the middle group.

Ideal ways to contribute to AI safety

Before we focus on what normal people could do, let's briefly consider what you would ideally do if you wanted to contribute to AI safety and reduce AGI x-risk.

The 80,000 Hours guide on AI risk recommends two main paths for decreasing existential risk from AI, technical AI safety and AI governance, and suggests pursuing a career in one of these areas.

For technical AI safety, there are at least three highly impactful paths:

To contribute to AI governance you could:

If you can follow one of these career paths, that's great, but I don't think it's realistic to expect an average person to become the next Neel Nanda or Paul Christiano in order to contribute to AI safety.

What can average people do?

So what can the average person do to help?

As mentioned in the first paragraph, the worst-case scenario is that average people can't have any meaningful influence on AGI and AGI x-risk reduction and are merely spectators watching the future unfold. But I think that's too pessimistic. The following sections outline practical ways that people like the median LessWrong reader can contribute to AI safety.

Become informed

If you're interested in AI safety or AI risk, you should definitely first educate yourself on the topic if you haven't already and continuously learn about the field.

Some good resources include the AI Safety Fundamentals course materials, aisafety.info, and some of the AI safety books such as The Alignment Problem, Human Compatible, and Superintelligence.

This advice may sound basic but many AI existential risk ideas are not intuitive (e.g. the orthogonality thesis). Understanding these concepts is crucial to avoid adding confusion to the broader conversation or embarrassing yourself. I lose a little faith in humanity every time I see a bad AI risk take on Twitter.

It's absolutely fine to disagree with claims in the field, but you should first familiarize yourself with the foundational concepts to avoid common misunderstandings. Learning ML also seems useful.

Action item: read a blog post or introductory article about AI safety, such as the AI alignment Wikipedia page.

Spread the message

Since reducing existential risk is extremely valuable and superintelligent AI is one of the top existential risks facing humanity in the 21st century (according to the book The Precipice), we should be talking about it more. While many young people are concerned about the future, these concerns often focus on climate change even though advanced AI is a greater existential risk.

Even if AI x-risk only becomes a pressing problem decades from now (though AI progress will probably be faster than that), starting the conversation now is essential for developing thoughtful, mature discussions. A diversity of perspectives is both valuable and necessary.

Try bringing up the future of AI with friends or family. If that feels uncomfortable, consider joining online discussions on platforms like LessWrong.

Action item: talk about AGI or AI safety with friends or write about it online. Write a Tweet about the future of AI or AI safety.

Become a member of LessWrong or the AI Alignment Forum

Another great way to contribute to AI safety is by engaging with the LessWrong or AI Alignment Forum communities, which often discuss AI safety. You can start by reading posts, leaving useful comments, or writing your own posts on these forums.

These forums are highly accessible because anyone from around the world can contribute. Additionally, unlike on typical forums, many users are professional AI safety researchers working in academia or at top AI labs like Anthropic, who share their latest work here alongside publishing papers. This means your comments could provide invaluable feedback on cutting-edge AI safety research.

You might also consider writing a post. Since text is high-dimensional and these communities are relatively small, there's a good chance that if you don't write up a specific idea or insight, no one else ever will.

Action item: comment on a recent LessWrong or Alignment Forum post on AI safety or write a blog post on AI safety.

Ways to contribute to technical AI safety research

If you’re interested in contributing to technical AI safety research, two accessible entry points are AI evaluations (evals) and literature reviews. These forms of work don’t necessarily require advanced technical skills and can still provide valuable insights for the field.

Action item: run AI evals on an AI safety dataset such as the Situational Awareness Dataset and evaluate some frontier models. Write a literature review or LessWrong blog post on a specific AI safety concept.
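If you want a concrete sense of what a small eval might look like in practice, here is a minimal sketch in Python. It is only an illustration under stated assumptions: it assumes a hypothetical local JSONL file of multiple-choice questions (the real Situational Awareness Dataset has its own format and official tooling), and it uses the official openai client; the file name, field names, and model name are placeholders.

```python
# Minimal multiple-choice eval loop (a sketch, not the official SAD harness).
# Assumes a hypothetical JSONL file where each line looks like:
#   {"question": "...", "choices": ["A) ...", "B) ..."], "answer": "A"}
import json

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY to be set

client = OpenAI()


def ask(model: str, question: str, choices: list[str]) -> str:
    """Ask the model to answer with a single letter and return its (stripped) reply."""
    prompt = question + "\n" + "\n".join(choices) + "\nAnswer with a single letter."
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=1,   # we only want the answer letter
        temperature=0,  # keep answers stable for scoring
    )
    return resp.choices[0].message.content.strip().upper()


def run_eval(model: str, path: str = "sad_sample.jsonl") -> float:
    """Return the model's accuracy on the question file at `path`."""
    correct = total = 0
    with open(path) as f:
        for line in f:
            item = json.loads(line)
            prediction = ask(model, item["question"], item["choices"])
            correct += int(prediction.startswith(item["answer"]))
            total += 1
    return correct / total if total else 0.0


if __name__ == "__main__":
    print("accuracy:", run_eval("gpt-4o-mini"))  # model name is just an example
```

For anything beyond a toy run, you would likely use the dataset's official evaluation harness or an existing evals framework rather than a hand-rolled loop like this, but a sketch like the one above is enough to get a feel for the workflow.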

Donate money

The world spends about $100 million every year on technical and governance work designed to reduce existential risk from AI. While that might sound like a lot, it isn't much on a global level. By some estimates, the world spends hundreds of billions of dollars a year on climate change mitigation, which is at least three orders of magnitude (1,000x or more) more than is spent on AI safety. Collective financial support from many individuals could significantly influence the trajectory of AI safety work, especially for projects overlooked by large funders.

If you're interested in donating to AI safety, consider donating to the Long-Term Future Fund or one of the AI safety projects on Manifund. Many projects on Manifund have funding targets in the thousands or tens of thousands of dollars, making it possible for individual donors to have a significant impact.

Action item: donate $100 to the Long-Term Future Fund or a Manifund AI safety project.

Protest

There are many reasons to believe that racing to build advanced AI systems is unwise.

While accelerating AGI development might bring its benefits sooner, it also introduces significant risks. A rapid race to build AGI could mean less time and fewer resources devoted to AI safety, and less time for society to adapt, factors that likely increase existential risk. Additionally, there's evidence that the American public is wary of advanced AI and prefers slow and safe development over rapid progress: one poll found that 72% of Americans would prefer slowing down the development of AI, whereas just 8% would prefer accelerating it.

History shows that public pressure can influence the course of high-stakes technological development. For example, in 1982, one million people gathered in New York City to protest the nuclear arms race and call for disarmament, demonstrating the power of collective action. Peaceful protesting is considered to be a key tool for citizens in democratic countries to express concerns to their government and advocate for change.

Organizations like Pause AI advocate for an international treaty to halt AI training runs larger than GPT-4 until their safety can be assured. A treaty could help resolve the collective action problem in which the world would be collectively better off slowing down AI progress but fails to do so because of individual incentives. Pause AI regularly organizes protests in cities around the world, offering opportunities for anyone to get involved.

Action item: attend a Pause AI protest.

Don't cause harm

Finally, we should always aim to at least not cause harm. Avoid actions that are likely net negative, such as accelerating AGI development, violence, or deception, and any other actions that could undermine AI safety or society.

Action item: N/A.

Could average people really have an impact?

In many fields, such as scientific research, impact follows a long-tailed distribution, where top contributors have a much larger impact than average contributors. This suggests that most people are unlikely to have a significant impact. However, it's important to note that this difference is relative. Although some people can contribute much more than others, the absolute impact of the average contributor could still be large.

Since AI safety is important and neglected, it's possible for many people, including average people, to have a large positive absolute impact. Furthermore, although the impact of any individual may be limited, the sum of contributions from many individuals could be large.
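To make the relative-versus-absolute point concrete, here is a toy numerical sketch in Python. The lognormal distribution and its parameters are my own arbitrary assumptions, not data from any study; the point is only that even when a small group dominates in relative terms, the combined contribution of everyone else can still be a large share of the total.

```python
# Toy illustration of a long-tailed "impact" distribution (assumed lognormal with
# arbitrary parameters). Compares the total contribution of the top 1% of
# contributors with the combined contribution of the other 99%.
import numpy as np

rng = np.random.default_rng(0)
impact = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)  # heavy-tailed, arbitrary

cutoff = np.quantile(impact, 0.99)  # threshold separating the top 1%
top_share = impact[impact >= cutoff].sum() / impact.sum()
rest_share = impact[impact < cutoff].sum() / impact.sum()

print(f"share of total impact from the top 1%:    {top_share:.0%}")
print(f"share of total impact from the other 99%: {rest_share:.0%}")
```

With these particular made-up parameters, the top 1% account for a large share of the total, yet the remaining 99% still contribute the majority, which is the sense in which many individually small contributions can add up.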

A future worth fighting for

Finally, we should be motivated to work on AI safety and strive for an amazing future.

Reducing existential risk is extremely important, but AI safety is about more than just preventing the destruction of the status quo.

Instead, what's at stake is an amazing future with a standard of living potentially much better than today's. Here's a quote from Machines of Loving Grace by Dario Amodei that describes how good the world could be if advanced AI is developed in a way that benefits humanity:

But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it.

