Newsroom Anthropic March 20
Anthropic’s Response to Governor Newsom’s AI Working Group Draft Report

The California Governor's working group on AI frontier models has released a draft report emphasizing the importance of objective standards, evidence-based policy guidance, and transparency. The report suggests that government can play a constructive role in improving transparency around AI companies' safety and security measures, for example by having companies publicly disclose their safety policies and testing documentation. Companies such as Anthropic already follow some of these recommendations, publicly describing how they assess model risks and disclosing test results. The report also notes the need to pay attention to AI's economic impacts. All parties should work together to build a more transparent policy regime for AI safety and security protocols, in order to meet the challenges posed by rapidly advancing AI systems.

🔑 The draft report from the California Governor's working group on AI frontier models emphasizes objective standards, evidence-based policy guidance, and transparency in the AI policy environment, with the aim of fostering positive-sum competition and increasing consumer trust.

🛡️ The report recommends that frontier AI companies publicly disclose their safety and security policies and document the tests they run, even when those policies are entirely of the companies' own choosing, in order to improve transparency around AI safety practices.

📊 Companies such as Anthropic have already adopted some of these recommendations, for example by publishing a Responsible Scaling Policy that describes how models are assessed for misuse and autonomy risks and the thresholds that trigger safety measures, and by publicly releasing safety test results.

💰 The report also stresses that academia, civil society, and industry need to pay more attention to AI's economic impacts, an area to which Anthropic is contributing through its Economic Index.

This week, the California Governor’s Working Group on AI Frontier Models released its draft report. We agree with the working group’s focus on the need for objective standards and evidence-based policy guidance, and especially its emphasis on transparency as a means to create a well-functioning AI policy environment.

When done thoughtfully, transparency can be a low-cost, high-impact means of growing the evidence base around a new technology, increasing consumer trust, and encouraging companies to engage in positive-sum competition with one another. We welcome greater discussion of how frontier labs should be transparent about their AI development practices and were glad to see the working group emphasize this. In particular, we appreciated the focus on the need for labs to disclose how they secure their models against theft and how they test their models for potential national security risks.

Many of the report’s recommendations already reflect industry best practices to which Anthropic adheres. For example, Anthropic’s Responsible Scaling Policy publicly lays out how we assess our models for misuse and autonomy risks, and the thresholds that trigger increased safety and security measures. We also publicly describe the results of our safety and security testing as part of each major model release, and we perform third-party testing to augment our own internal tests. Many other frontier AI companies have similar practices.

In line with the report’s findings, we believe governments could play a constructive role in improving transparency in the safety and security practices of frontier AI companies. At present, frontier AI companies are not required to have a safety and security policy (even one entirely of their own choosing), to describe it publicly, or to publicly document the tests they run, and therefore not all companies do. We believe this could be addressed in a light-touch way that does not impede innovation. As we wrote in our recent policy submission to the White House, we believe powerful AI systems will arrive soon, perhaps as early as the end of 2026, so it is important that we all devote effort to building a policy regime that creates greater transparency into the safety and security protocols under which AI systems are built.

The Working Group has also highlighted areas where academia, civil society, and industry will need to apply more focus in the coming years, particularly the economic impacts of AI, to which Anthropic is contributing today via our Economic Index. We look forward to providing further feedback to the working group to aid and inform the work of finalizing the report. We commend the Governor for his foresight in kicking off this conversation, and we look forward to helping shape California’s approach to frontier model safety.
