The White House is directing agencies to prepare for risks from AI. What are you going to do about it?

The article argues that much of today's AI safety communications work focuses on conveying already-known risks and priorities to the US government, such as national security risks, AI interpretability and control, and the AI evaluations ecosystem. But the government's recently released AI Action Plan already names these areas as explicit priorities. Rather than repeating those calls, the author suggests focusing effort on how to execute these established policy priorities effectively: AI experts should learn how policy mechanisms actually work, offer the government concrete, actionable implementation ideas for AI safety frameworks, roadmaps, toolkits, and AI assurance standards, and share those ideas through public channels (social media, think-tank reports) where they can have the greatest marginal impact on how AI policy gets put into practice.

🎯 Focus on policy execution rather than repeating known calls: instead of pitching the US government on AI safety risks and priorities it already knows (national security risks, AI interpretability and control, the AI evaluations ecosystem), put the emphasis on how to effectively execute the AI Action Plan the government has already published. This is more constructive and directly advances the implementation of AI safety policy.

🧠 Understanding policy mechanisms is key: AI experts generally know little about how government works, for example the difference between CISA and the NSC. Effective participation in AI policymaking starts with learning the policy levers and understanding the roles and remits of different agencies, so that recommendations are actually actionable.

💡 Contribute value through concrete implementation plans: dig into specific agenda items in the AI Action Plan, such as "Promote Secure-By-Design AI Technologies and Applications", and offer the Department of Defense (DOD), the National Institute of Standards and Technology (NIST), the Department of Commerce (DOC), and the Office of the Director of National Intelligence (ODNI) concrete recommendations for refining AI frameworks, roadmaps, toolkits, and AI assurance standards.

🚀 Spread insights via think tanks and academic channels: publish ideas about AI policy implementation on Twitter/X or Substack, or, ideally, as white papers with respected think tanks, newspapers, or academic journals, to inform and guide government practice.

🤝 External experts can give the government valuable support: since so much AI technical talent sits outside government, outside expertise and fresh ideas matter greatly for the government's AI safety work, and can help it tackle complex AI safety challenges more effectively.

Published on July 26, 2025 2:46 AM GMT

Quite a lot of people are talking about doing "AI Safety Communications" to convince the US government to do things like take "National Security Risks in Frontier Models" seriously, invest in "AI Interpretability and Control", and build an "AI Evaluations Ecosystem". But uh... 

The White House just released an AI Action Plan directing agencies to (among other things) work on exactly these priorities.

I haven't seen much discussion in this community about the recent AI Action Plan; maybe I missed it. (All I've seen is Zvi declaring it is "pretty good".)

In the absence of high-profile discussion around the AI Action Plan, it kind of feels like people are mobilizing to tell the US Government stuff it has not only already heard, but has already made a clear priority to work on.

There's nothing wrong with doing more comms, and if you're especially skilled at massive public outreach, then I guess do what you're good at.

But I feel like the place to have big marginal impact right now is to dive into areas that the Trump admin has already ordered the government to work on, and come up with great new ideas on how to implement these priorities as effectively as possible. These are complicated questions that people aren't sure how to solve; getting a bunch of smart people who are familiar with frontier models to help think things through seems like it would be super useful. 

What would this concretely look like?

First, learn about policy levers. A random example: learn the difference between CISA (the Cybersecurity and Infrastructure Security Agency) and the NSC (the National Security Council). I've been a bit shocked by how little super smart AI people know about government.

Second, think really hard about how to implement the AI Action Plan's priorities, and come up with some solid plans.

Third, publicize your ideas via Twitter/X, Substack, and/or (ideally) white papers with a respected think tank, newspaper, or academic journal.

To be super concrete, here's one of the policy agendas from the AI Action Plan for "Promote Secure-By-Design AI Technologies and Applications":

Led by DOD in collaboration with NIST at DOC and ODNI, continue to refine DOD’s Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits. 
Led by ODNI in consultation with DOD and CAISI at DOC, publish an IC Standard on AI Assurance under the auspices of Intelligence Community Directive 505 on Artificial Intelligence.

Seems like there's good work to be done in fleshing out what those Responsible AI and Generative AI Frameworks, Roadmaps, and Toolkits should contain, and what an IC Standard on AI Assurance should actually require.

Of course the agencies will be working on this internally. But it's the role of think tanks, academia, and broader civil society to generate great policy ideas that help guide what the government tries to do.

Considering how much AI technical talent sits outside of government, this seems like an area where having external experts chime in could be especially valuable. 


