AISN #59: EU Publishes General-Purpose AI Code of Practice

 


Published on July 15, 2025 6:59 PM GMT

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.

EU Publishes General-Purpose AI Code of Practice

In June 2024, the EU adopted the AI Act, which remains the world’s most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained using ≥10^25 FLOPs).
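
For a sense of scale, training compute is commonly estimated with the heuristic FLOPs ≈ 6 × parameters × training tokens. Below is a minimal sketch of checking a run against the Act's threshold; the model sizes are illustrative assumptions, not figures from the Act or any provider.

```python
# Rough check of whether a training run crosses the AI Act's 10^25 FLOP
# systemic-risk threshold. Uses the common "6 * N * D" approximation
# (compute ~ 6 x parameters x training tokens); the model figures below
# are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # from the AI Act

def training_flops(n_params: float, n_tokens: float) -> float:
    """Estimate training compute with the 6*N*D heuristic."""
    return 6 * n_params * n_tokens

hypothetical_runs = {
    "7B params on 2T tokens": training_flops(7e9, 2e12),     # ~8.4e22
    "1T params on 15T tokens": training_flops(1e12, 15e12),  # ~9.0e25
}

for name, flops in hypothetical_runs.items():
    status = "crosses" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({status} threshold)")
```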

However, these safety and security standards are ambiguous—for example, the Act requires providers of GPAIs to “assess and mitigate possible systemic risks,” but does not specify how to do so. This ambiguity may leave GPAI developers uncertain whether they are complying with the AI Act, and regulators uncertain whether GPAI developers are implementing adequate safety and security practices.

To address this problem, the EU published the General-Purpose AI Code of Practice on July 10th, 2025. The Code is a voluntary set of guidelines for complying with the AI Act’s GPAI obligations, which take effect on August 2nd, 2025.

The Code of Practice establishes safety and security requirements for GPAI providers. It consists of three chapters: Transparency, Copyright, and Safety and Security. The last chapter applies only to the handful of companies whose models cross the Act’s systemic-risk threshold.

The Safety and Security chapter requires GPAI providers to create frameworks outlining how they will identify and mitigate risks throughout a model's lifecycle. These frameworks must follow a structured approach to risk assessment: for each major decision (such as a new model release), providers must complete three steps, sketched in code after this list:

1. Risk identification: determine which systemic risks the model could pose, such as CBRN risks, loss of control, cyberattack capabilities, and harmful manipulation.
2. Risk analysis: estimate the severity and likelihood of each identified risk.
3. Risk determination: decide whether the analyzed risks are acceptable and, if not, which further mitigations are required.
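
A minimal sketch of how a provider might operationalize these three steps, assuming a simple severity-times-likelihood scoring and a pre-committed acceptability threshold (neither of which the Code prescribes):

```python
# Hypothetical three-step risk assessment for a major decision such as a
# model release. Risk names, scales, and the acceptability rule are
# illustrative assumptions, not the Code's actual requirements.
from dataclasses import dataclass
from enum import Enum

class Determination(Enum):
    ACCEPTABLE = "proceed with release"
    UNACCEPTABLE = "mitigate further or halt release"

@dataclass
class Risk:
    name: str          # e.g. "CBRN uplift", "cyberattack capability"
    severity: int      # assumed 1 (minor) to 5 (catastrophic) scale
    likelihood: float  # estimated probability after mitigations

def assess(identified_risks: list[Risk], threshold: float = 1.0) -> Determination:
    # Step 1 (risk identification) is assumed to have produced `identified_risks`.
    # Step 2: risk analysis -- score each identified risk.
    worst_score = max(r.severity * r.likelihood for r in identified_risks)
    # Step 3: risk determination -- compare against a pre-committed threshold.
    if worst_score <= threshold:
        return Determination.ACCEPTABLE
    return Determination.UNACCEPTABLE

risks = [Risk("CBRN uplift", 5, 0.05), Risk("harmful manipulation", 3, 0.2)]
print(assess(risks))  # Determination.ACCEPTABLE (worst score 0.6 <= 1.0)
```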

Continuous monitoring, incident reporting timelines, and future-proofing. The Code requires continuous monitoring after models are deployed and sets strict incident reporting timelines: for serious incidents, companies must file initial reports within days. The Code also acknowledges that current safety methods may prove insufficient as AI advances, so companies may implement alternative approaches if they demonstrate equal or superior safety outcomes.

AI providers will likely comply with the Code. While the Code is technically voluntary, compliance with the EU AI Act is not. Providers are incentivized to reduce their legal uncertainty by complying with the Code, since EU regulators will assume that providers who comply with the Code are also Act-compliant. OpenAI and Mistral have already indicated they intend to comply with the Code.

The Code formalizes some existing industry practices advocated for by parts of the AI safety community, such as publishing safety frameworks (also known as responsible scaling policies) and system cards. Since frontier AI companies are very likely to comply with the Code, securing similar legislation in the US may no longer be a priority for the AI safety community.

Meta Superintelligence Labs

Meta spent $14.3 billion for a 49 percent stake in Scale AI and launched “Meta Superintelligence Labs.” The deal folds every AI group at Meta into a single division and puts Scale founder Alexandr Wang, now Meta’s chief AI officer, in charge of the company’s superintelligence development efforts.

Meta makes nine-figure pay offers to poach top AI talent. Reuters reported that Meta has offered “up to $100 million” to OpenAI staff, a tactic OpenAI CEO Sam Altman criticized. SemiAnalysis estimates Meta is offering typical leadership packages of around $200 million over four years. For example, Bloomberg reports that Apple’s foundation-models chief Ruoming Pang left for Meta after a package “well north of $200 million.” Other early recruits span OpenAI, DeepMind, and Anthropic.

Meta has created a well-resourced competitor in the superintelligence race. In response to Meta’s hiring efforts, OpenAI, Google, and Anthropic have already raised pay bands, and smaller labs might be priced out of frontier work.

Meta is also raising its compute expenditures. It lifted its 2025 capital-expenditure forecast to $72 billion, and SemiAnalysis describes new, temporary “tent” campuses that can house one-gigawatt GPU clusters.
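
For a rough sense of that scale, here is a back-of-the-envelope estimate; the per-GPU power draw and overhead multiplier are assumptions (roughly H100-class hardware), not figures from the article.

```python
# Back-of-the-envelope estimate of how many accelerators a one-gigawatt
# campus could power. Both hardware figures are illustrative assumptions.
campus_power_w = 1e9   # 1 GW campus
gpu_power_w = 700      # assumed draw per H100-class GPU
overhead = 1.5         # assumed multiplier for cooling, networking, and CPUs

num_gpus = campus_power_w / (gpu_power_w * overhead)
print(f"~{num_gpus:,.0f} GPUs")  # roughly 950,000
```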

In Other News

Government

Industry

Civil Society

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.



