Newsroom · Anthropic · 11 hours ago
Anthropic to sign the EU Code of Practice

Anthropic has announced that it will sign the EU's General-Purpose AI Code of Practice, which emphasizes transparency, safety, and accountability, values Anthropic has long championed for frontier AI development. The article notes that if the EU AI Act and the Code are implemented thoughtfully, they could help the European economy gain more than a trillion euros per year by the mid-2030s. Working alongside Europe's AI Continent Action Plan, the Code demonstrates how flexible safety standards can preserve innovation while enabling broader AI deployment. Anthropic believes that clear risk-assessment processes can accelerate work on Europe's most pressing challenges, such as advancing scientific research, improving public services, and strengthening industrial competitiveness. The article also stresses the importance of transparency for AI safety, notes that the rapid pace of AI development demands policy flexibility, and says Anthropic looks forward to working with the EU AI Office and safety organizations to keep the Code effective and adaptable.

✅ Anthropic is signing the EU's General-Purpose AI Code of Practice, which aims to advance transparency, safety, and accountability in AI development. The Code aligns with Anthropic's own Responsible Scaling Policy and requires mandatory safety frameworks for assessing and mitigating systemic risks, including catastrophic risks from Chemical, Biological, Radiological, and Nuclear (CBRN) weapons.

📈 Implementation of the EU AI Act and the Code is expected to energize the European economy, potentially adding more than a trillion euros per year by the mid-2030s. This suggests that flexible safety standards can balance innovation with broad AI deployment, helping Europe stay competitive in this transformational technology.

🚀 The article cites concrete examples of AI in use, such as Novo Nordisk's breakthroughs in drug discovery, Legora's efficiency gains in legal work, and the European Parliament expanding citizens' access to decades of archives. These cases highlight AI's enormous potential while underscoring the need for public visibility into AI safety and transparency practices, alongside preserving the private-sector agility needed to deliver AI's transformative potential.

🔄 The rapid pace and constant evolution of AI technology demand flexible, adaptable policy. Anthropic has revised its Responsible Scaling Policy several times over the past two years based on insights from practical implementation, for example by clarifying which actors fall within the scope of the ASL-3 Security Standard, a determination grounded in a deeper understanding of the relevant threat models and model capabilities.

🤝 Industry bodies such as the Frontier Model Forum play a key role in establishing common safety practices and evaluation standards, bridging industry and government by translating technical insights into actionable policy. Anthropic commits to working with the EU AI Office and safety organizations to keep the Code both robust and responsive to emerging technologies; this collaborative approach is essential for Europe to make the most of AI and compete effectively on the global stage.

After review, Anthropic intends to sign the European Union's General-Purpose AI Code of Practice. We believe the Code advances the principles of transparency, safety and accountability—values that have long been championed by Anthropic for frontier AI development. If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness.

A recent analysis found that AI has the potential to add more than a trillion euros per year to the EU economy by the mid-2030s. The Code, working alongside Europe's AI Continent Action Plan, demonstrates how flexible safety standards can both preserve innovation and enable broader AI deployment. This approach highlights the opportunities and imperatives required for Europe to remain competitive in this transformational technology. With transparent risk assessment processes in place, we can accelerate work to address Europe's most pressing challenges: advancing scientific research, improving public services, and enhancing industrial competitiveness.

We're already seeing signs of what's possible, from Novo Nordisk accelerating breakthrough drug discovery, to Legora transforming legal work, to the European Parliament expanding citizens' access to decades of archives. Ensuring these benefits materialize with minimal downside requires public visibility into AI safety and transparency practices while preserving private sector agility to deliver AI's transformative potential.

Building on our commitment to transparency

As outlined previously, Anthropic believes the frontier AI industry needs robust transparency frameworks that hold companies accountable for documenting how they identify, assess, and mitigate risks. The EU Code establishes this baseline through mandatory Safety and Security Frameworks that build upon Anthropic’s own Responsible Scaling Policy and will describe important processes for assessing and mitigating systemic risks. This includes assessment of catastrophic risks—particularly those from Chemical, Biological, Radiological, and Nuclear (CBRN) weapons.

Maintaining flexibility

AI moves fast and changes constantly, which means the best policies are those that can be flexible and adapt alongside the technology.

Over the nearly two years since we first published our Responsible Scaling Policy, we've refined it several times based on practical insights from implementation. For example, our most recent update clarified which actors are in-scope for the ASL-3 Security Standard; this determination was based on a deeper understanding of the relevant threat models and model capabilities.

As an industry, we're still developing best practices for assessing the systemic risks identified in the Code. Different risks require different methodologies. Third-party organizations like the Frontier Model Forum play a critical role, establishing common safety practices and evaluation standards that evolve with the technology. These groups bridge industry and government, translating technical insights into actionable policy.

We're committed to working with the EU AI Office and safety organizations to ensure the Code remains both robust and responsive to emerging technologies. This collaborative approach—combining regulatory frameworks with flexibility—will be essential for Europe to harness AI's benefits while competing effectively on the global stage.
