LessWrong
Good Ideas Aren't Enough in AI Policy

Published on August 5, 2025 10:38 PM GMT

In Silicon Valley, a compelling technical argument can secure funding and talent. In Washington, a compelling argument is merely table stakes—what matters is who delivers it, what relationships and trust have been built, when, to whom, and with what coalition backing them up. We need more governance people who have that lobbyist mindset to get wins for AI safety.

This piece is for early-career policy people and technical people considering an AI policy career. My goal is to convey some intuitions about effective AI policy to more effectively steer talent.

People working in technical AI safety usually have faith in the value of good ideas. This makes sense in a tech world with clear pipelines for connecting any somewhat compelling pitch to billionaire venture capitalists who will throw money at you. 

There's a natural tendency to apply this thinking to politics. I remember thinking: if only I could get an hour with my state senator, I could convince them existential risk is important. Why hasn't anyone done this yet? In my mind, the lack of good policy outcomes meant that our ideas and arguments weren't good enough. That if we found enough avenues of conveying information—demos, expert testimonials, petitions—politicians would have to rationally understand the importance of AI safety.

This thinking can be a pitfall for many early-career governance people. Writing policy briefs and pumping out academic articles can feel orderly, rational, and persuasive, but it is not what gets legislation passed in Washington.

 

The Socialization Illusion

In my experience speaking to Congressional staffers, they are bright, curious, personable, and generally pro-humanity people. They are very good at making you feel heard, and after chatting with them you feel like they're on your side, fighting for you. Yet the socialization of ideas is much harder than it outwardly appears.

Lawmakers have strong antibodies against bullshit. They constantly get calls from constituents ranging from “I can’t pay my rent” to “can you fix the fucking pothole on my street” to attempts to embarrass them to general psychosis and conspiracy theories. They’re always on the lookout for hidden motives and self-serving intentions. 

Their bullshit detector is primed to avoid false positives—in other words, they won’t escalate an issue unless they’re absolutely sure it’s an important one. How do they determine that? They need credible authorities telling them that it’s a problem, people and institutions they trust to both tell the truth and not screw them over. Then they check with their interest groups and backers, testing the waters of different coalitions to see whether a proposal will gain or lose them support, or cause dozens of lobbyists to descend on them.

So when lawmakers don’t take existential risk or any of the manifold issues we care about seriously, it’s not (necessarily) because lawmakers are stupid or evil. They genuinely have a dizzying array of issues and a staggering set of interest groups to contend with. Their staff is limited. 

They have personal expertise and legislative pet issues, but it’s impossible for them to dedicate much mental capacity to things as disparate as environmental policy in the Florida wetlands or the technical aspects of AI safety. To succeed, they must be unbelievably good at regulating which ideas cost them time, influence, and mental real estate. And in this paucity of attention, pathos and logos can help, but it is ethos that rules.

 

A Simplified Model of Politics

A good recent case study of how politics works is the proposed 10-year AI Moratorium. Though the Senate rejected it by a seemingly decisive 99-1 margin, the process was anything but straightforward.

We can divide the legislative process into two phases: the socialization of ideas and the actual vote. In Phase 1, you build coalitions, gain popular support, shift the Overton window, implant ideas in the DC sphere, and perhaps make a big enough fuss that some candidates add a blurb about your issue to their platforms. There is a purity to this process—good arguments, distribution, and ideas are really helpful!

Phase 2 is ruthless. If you’ve read Dwarkesh Patel’s article on Lyndon B. Johnson (a fascinating read!), you’ll know this is the hour of cajoling, blackmail, favors, and charisma. You can perfect your socialization period, but without enough pragmatism and will to win in Phase 2, your legislation will probably not get passed.

 

The Moratorium: A Drama

Back to the AI moratorium. In the last 72 hours before the climactic vote, long past the time for democratic consensus and reasonable arguments, there is sophisticated chaos in Washington. Lobbyists on both sides pull all-nighters, scrambling to build coalitions and battling to be heard. Memos, briefs, and petitions patter like rain onto legislators’ desks, including the critical desk of Marsha Blackburn, the vacillating crux who might be willing to vote against her fellow Republican Ted Cruz’s proposition.

Senator Blackburn ran on a platform of improving children’s online safety. That’s her issue, and she worries that this blanket ban on state-level tech regulation will make that promise impossible. Cruz, wily as ever, offers a compromise: the ban will be reduced to 5 years, with certain provisions to placate Blackburn’s concerns. She is on the brink of accepting it, and if she does, the moratorium will surely pass. 

However, the other side still has cards to play. Opponents of the moratorium gather more than a hundred signatures from children’s online safety groups urging Blackburn to vote against the legislation. Then LawAI conducts a thorough review of Cruz’s proposed compromise and realizes that it will not protect children as Blackburn had believed. Their rapidly drafted analysis appears on Blackburn’s desk, telling her that she’s been duped. The careful maneuvering of coalitions and the trusted, timely analysis of experts place too much pressure on Blackburn’s campaign promises. She turns, and Cruz’s moratorium loses the critical support it needs.

So much of AI policy is doing research, drafting articles and documents, and perhaps trying to get our ideas in front of policymakers, and that’s extremely valuable work. But we forget that we are not the only player on the board. OpenAI and Meta’s lobbyists are very, very good. They have access to a lot of resources, are extremely comfortable navigating DC, and are intelligent, interesting, charismatic individuals with their own ideas about how the world should be run.

In ecology, organisms must compete to occupy niches with limited resources. Policy can be a positive-sum game but a policymaker’s attention is zero-sum, and savvy operators need to get their ideas socialized at all costs when the opportunity arises. In the rare situations legislators do have the energy and will for dialogue, we need to get our ideas in as effectively as possible, soaking up the resources of the niche. 

 

Implications

I want to qualify these takes: I am a college student, not some expert just because I’ve talked to some staffers. And these ideas probably carry less weight outside the US. But I’d still like to share some intuitions that have been building for me:

Overton window shifts matter, but aren't sufficient. Even if 60% of voters suddenly started talking about AI safety, I wouldn’t be super optimistic about legislation passing soon. The silver lining is that right now we certainly don’t have anywhere near 60% public interest, and we’ve still had mild political momentum! Ultimately, passing legislation is pretty divorced from public opinion, for better or worse.

AI governance needs more pragmatic people with tacit political knowledge. We need people comfortable with policymaker engagement who understand politics as relationship/coalition ecosystems, while also capable of deep research and quality argumentation. Exemplars of our own, who can win and attain meaningful compromise, not just write papers.

We need better pipelines for giving early-career governance people these skills and experiences. (I'm kind of working on this.)

Purity about x-risk is counterproductive. Contributing expertise to (and I hate this term) "prosaic" issues—misinformation, online safety, deepfakes—builds trust on topics politicians already value. The direct x-risk impact is lower, but the work is still beneficial and politically strategic. Responsible innovation is a more compelling umbrella than x-risk doomerism, allowing us to make AI regulation as positive-sum as possible.

 

 

TLDR: Getting even well-credentialed students and researchers to directly engage with legislators can be fruitless without a history of trust and a tacit knowledge of politics. In Silicon Valley, a compelling technical argument can secure funding and talent. In Washington, a compelling argument is table stakes. What matters is who delivers it, what relationships and trust have been built, when, to whom, and with what coalition backing them up.

 

No, good ideas aren’t enough.


