The Verge - Artificial Intelligence · August 29, 2024
California State Assembly passes sweeping AI safety bill

 

The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), one of the first major regulatory measures targeting artificial intelligence in the US. The bill requires AI companies operating in California to take a series of safety precautions before training large foundation models, including ensuring that a model can be shut down quickly and safely, protecting it against malicious post-training modification, and testing whether it could cause critical harm. Its aim is to ensure that AI technology can be deployed safely and reliably as it develops.


⚠️ The bill's passage has sparked heated debate across the industry and beyond; some AI companies and organizations argue that it focuses too heavily on catastrophic harms and could disadvantage small open-source AI developers.

🤝 The bill was amended to replace potential criminal penalties with civil ones, narrow the enforcement powers granted to California's attorney general, and adjust the requirements for joining the "Board of Frontier Models" the bill creates.

🗳️ Having passed the State Assembly, the bill now heads to California Governor Gavin Newsom, who must decide its fate by the end of September.

Illustration by Cath Virginia / The Verge | Photos from Getty Images

The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first significant regulations of artificial intelligence in the US.

The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against "unsafe post-training modifications," and maintaining a testing procedure to evaluate whether a model or its derivatives are especially at risk of "causing or enabling a critical harm."

Senator Scott Wiener, the bill's main author, said SB 1047 is a highly reasonable bill that asks large AI labs to do what they've already committed to doing: test their large models for catastrophic safety risk. "We've worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted."

Critics of SB 1047 — including OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce — have argued that it’s overly focused on catastrophic harms and could unduly harm small, open-source AI developers. The bill was amended in response, replacing potential criminal penalties with civil ones, narrowing enforcement powers granted to California’s attorney general, and adjusting requirements to join a “Board of Frontier Models” created by the bill.

After the State Senate votes on the amended bill — a vote that’s expected to pass — the AI safety bill will head to Governor Gavin Newsom, who will have until the end of September to decide its fate, according to The New York Times.

Anthropic declined to comment beyond pointing to a letter sent by Anthropic CEO Dario Amodei to Governor Newsom last week. OpenAI didn’t immediately respond to a request for comment.
