LessWrong · June 11, 22:12
Difficulties of Eschatological policy making [Linkpost]

This article examines the difficulty of communicating AI risk to policymakers. The author argues that because concerns about AI risk tend to involve end-of-the-world scenarios, policymakers find them hard to understand and act on. The article describes the day-to-day pressures policymakers face and the communication impasse between them and AI risk advocates. It also analyzes the limits of what policymaker action can accomplish against AI risk, and the kind of feedback policymakers are likely to receive for that action. Overall, it stresses the enormous challenges of communication, understanding, and prioritization in AI policy.

🤯 **AI risk as an end-of-the-world scenario**: Many people concerned with AI risk believe that, if badly managed, powerful AI systems could end the world; this eschatological framing makes the issue hard for policymakers to grasp and take seriously.

💼 **Policymakers' everyday pressures**: Policymakers juggle geopolitical conflict, shifts in international trade, economic strain, and threats to public safety, which makes it hard for them to prioritize an AI risk that seems remote.

🗣️ **The communication gap between AI advocates and policymakers**: Communication between AI risk advocates and policymakers often breaks down because the two sides differ enormously in how they understand and prioritize the problem. Policymakers can struggle with the urgency and complexity AI experts convey, and rarely receive positive feedback from those experts on the actions they take.

⏳ **The limits of policy action and its feedback**: Even when policymakers act, the effect is hard to measure, and AI experts may regard the action as merely buying time rather than solving the underlying problem. This feedback loop compounds policymakers' frustration.

Published on June 11, 2025 2:12 PM GMT

Jack Clark has a very important post on why it is so difficult to communicate with policymakers about AI risk. The reason is that AI risk (and most discussion of AGI/ASI) is basically eschatological: it involves the end of the world and technology that looks like magic being developed by AIs, and this creates a very difficult landscape for policymakers.

In particular, each group of experts considers the other to be wildly incorrect, there is little feedback on anything you do, and what feedback exists may be corrupted. This explains a lot about why policymakers are doing things that feel wildly underscaled relative to the problem of AI x-risk:

Eschatological AI Policy Is Very Difficult

A lot of people that care about the increasing power of AI systems and go into policy do so for fundamentally eschatological reasons – they are convinced that at some point, if badly managed or designed, powerful AI systems could end the world. They think this in a literal sense – AI may lead to the gradual and eventually total disempowerment of humans, and potentially even the death of the whole species.

People with these views often don’t recognize how completely crazy they sound – and I think they also don’t manage to have empathy for the policymakers that they’re trying to talk to.

Imagine you are a senior policymaker in a major world economy – your day looks something like this:

    There is a land war in Europe, you think while making yourself coffee.

    The international trading system is going through a period of immense change and there could be serious price inflation which often bodes poorly for elected officials, you ponder while eating some granola.

    The US and China seem to be on an inexorable collision course, you write down in your notepad, while getting the car to your place of work.

    There are seventeen different groups trying to put together attacks that will harm the public, you say to yourself, reading some classified briefing.

    “Something akin to god is coming in two years and if you don’t prioritize dealing with it right now, everyone dies,” says some relatively young person with a PhD and an earnest yet worried demeanor. “God is going to come out of a technology called artificial intelligence. Artificial intelligence is a technology that lots of us are developing, but we think we’re playing Russian Roulette at the scale of civilization, and we don’t know how many chambers there are in the gun or how many bullets are in it, and the gun is firing every few months due to something called scaling laws combined with market incentives. This technology has on the order of $100 billion dollars a year dumped into its development and all the really important companies and infrastructure exist outside the easy control of government. You have to do something about this.”

The above is, I think, what it’s like being a policymaker in 2025 and dealing with AI on top of everything else. Where do you even start?

[Intermission]

Let us imagine that you make all of these policy moves. What happens then? Well, you’ve mostly succeeded by averting or delaying a catastrophe which most people had no knowledge of and of the people that did have knowledge of it, only a minority believed it was going to happen. Your ‘reward’ insofar as you get one is being known as a policymaker that ‘did something’, but whether the thing you did is good or not is very hard to know.

The best part? If you go back to the AI person that talked to you earlier and ask them to assess what you did, they’ll probably say some variation of: “Thank you, these are the minimum things that needed to be done to buy us time to work on the really hard problems. Since we last spoke the number of times the gun has fired has increased, and the number of bullets in the chamber has grown.”
“What did I do, then?” you ask.
“You put more chambers in the gun, so you bought us more time,” they say. “Now let’s get to work.”

I write all of the above not as an excuse for the actions of policymakers, nor as a criticism of people in the AI policy community who believe in the possibility of superintelligence, but rather to illustrate the immense difficulty of working on AI policy when you truly believe that the technology may have the ability to end the world. Most of the policy moves that people make – if they make them – are going to seem wildly unsatisfying relative to the scale of the problem. Meanwhile, the people who make these moves are likely juggling them against a million other priorities and will be looking to the AI experts for some level of confidence and validation – neither of which is easily given.

Good luck to us all.


