LessWrong · March 14
Should AI safety be a mass movement?
This article considers whether communication about existential risk from AI should focus more on decisionmakers or on the general public. The author leans toward communicating with decisionmakers, arguing that this makes it easier to keep the issue non-partisan and rational and avoids the downsides of public polarization. Some counterpoints are also noted.

🎯 Communicating to a narrower audience makes it more likely that the issue stays non-partisan and rational.

💡 The public tends to focus on day-to-day concerns and is likely to pay attention to AI risk only late.

📋 Lobbying decisionmakers may be quicker and more effective than changing public opinion.

Published on March 13, 2025 8:36 PM GMT

When communicating about existential risks from AI misalignment, is it more important to focus on policymakers/experts/other influential decisionmakers or to try to get the public at large to care about this issue?[1] I lean towards it being overall more important to communicate to policymakers/experts rather than the public. However, it may be valuable for certain individuals/groups to focus on the latter, if that is their comparative advantage. 

Epistemic status 

The following is a rough outline of my thoughts and is not intended to be comprehensive. I'm uncertain on some points, as noted, and I am interested in counterarguments. 

Reasons for x-risk to be a technocratic issue rather than a public conversation

    1. Communicating to a narrower audience makes it more likely that the issue can remain non-partisan and non-divisive. Conversely, if the public becomes divided into "pro-safety" and "anti-safety" camps, potentially along partisan lines, then:
        - Partisan polarization will make it harder to cooperate on reducing risk with the "anti-safety" party and the voters/groups aligned with it.
        - It will also make it more likely that AI policy and strategy will take place within the broader ideological paradigm of the pro-safety party; legitimate concerns that don't fit within this paradigm are less likely to be addressed than if AI safety remained apolitical.
        - The debate will become less rational.[2] There will be negative epistemic consequences from persuading policymakers as well ("Politics is the mind-killer"), but my sense is that it would be much harder to speak honestly and avoid demagoguery when trying to convince large masses of people. All kinds of misconceptions and false memes spread in popular political debates, and it seems easier to have an informed conversation with a smaller number of people.
    2. It's hard to persuade people to believe in and care about a risk that feels remote, hard to understand, or weird. Most people tend to focus on things that affect their day-to-day lives, so they are only likely to care about x-risk once harms from AI have become concrete and severe. This may not happen before it is too late.[3] Given this uncertainty, it seems better not to rely on a strategy that will mostly only work in a soft-takeoff scenario.
    3. Voters' opinions will influence policy to some degree, but it is not obvious that persuading voters is a more effective method of change than lobbying policymakers directly (even if many voters can be persuaded, in spite of point 2),[4] and lobbying policymakers seems quicker than changing the opinions of the public at large, which matters if timelines are short.

Counterpoints

  1. ^

    By "the public," I mean average voters, not people on LessWrong.

  2. ^

    Regardless of whether the division aligns with partisan lines.

  3. ^

    E.g. Toby Ord, "The Precipice: Existential Risk and the Future of Humanity" (2020), p. 183: "Pandemics can kill thousands, millions, or billions; and asteroids range from meters to kilometers in size. [...] This means that we are more likely to get hit by a pandemic or asteroid killing a hundredth of all people before one killing a tenth, and more likely to be hit by one killing a tenth of all people before one killing almost everyone. In contrast, other risks, such as unaligned artificial intelligence, may well be all or nothing."

  4. ^

    See e.g. here.

  5. ^

    Regarding asteroids, see Ord (2020), p. 72.

  6. ^

    I don't have a formal source for this, just my observations of politics and others' analysis of it.

  7. ^

    See, e.g. here and here.

  8. ^

    Backlash against protests in 1968 has been said to have led to the election of Richard Nixon. See also here.



