AISN #60: The AI Action Plan

This edition of the AI safety newsletter focuses on new EU rules and strategic moves by the tech giants. The EU has published the General-Purpose AI Code of Practice, which aims to regulate general-purpose AI (GPAI) providers and gives concrete guidance for implementing the AI Act, emphasizing transparency, copyright protection, and the identification and mitigation of safety risks. Meanwhile, Meta is spending heavily to rebuild its superintelligence research effort, acquiring a stake in Scale AI, consolidating its internal AI teams, and attracting top AI talent with outsized pay, intensifying competition across the field while sharply increasing its investment in compute. The article also briefly covers AI regulatory developments and industry events elsewhere, such as the expansion of California's AI safety bill, stepped-up export-control enforcement, and the potential impact of AI tools on developer productivity.

🌍 The EU's newly published General-Purpose AI Code of Practice offers detailed voluntary guidance for implementing the AI Act, particularly around transparency, copyright protection, and safety risk management for general-purpose AI (GPAI). The Code requires GPAI providers to establish risk identification and mitigation frameworks and lays out a structured process running from risk identification through analysis to acceptance determination, with particular attention to four categories of systemic risk: CBRN risks, loss of control, offensive cyber capabilities, and harmful manipulation. The goal is to discipline AI development and reduce potential risks.

🚀 Meta is accelerating its superintelligence push with massive investments: it paid $14.3 billion for a 49 percent stake in Scale AI, founded "Meta Superintelligence Labs," and consolidated all of the company's internal AI teams. Meta is also recruiting AI talent from OpenAI, DeepMind, and other top labs with compensation packages reaching $100 million, a move that has already triggered pay competition among the other major AI companies and may leave smaller AI research organizations at a disadvantage in the fight for talent.

💡 Meta is sharply increasing its investment in compute, raising its 2025 capital-expenditure forecast to $72 billion and planning temporary "tent" data centers that can house one-gigawatt GPU clusters. This signals that Meta is going all in on AI infrastructure to sustain long-term development and competition in superintelligence, with likely knock-on effects for AI chip suppliers and energy markets.

⚖️ On AI regulation and industry news, the article notes several other developments. California State Senator Scott Wiener expanded the SB 53 AI safety bill with new transparency measures; the US Commerce Department is seeking additional funding for the Bureau of Industry and Security (BIS) to strengthen export-control enforcement; Missouri's attorney general is investigating whether AI chatbots exhibit political bias; and the BRICS countries signed an agreement that includes commitments to mitigate AI risks. The article also notes that AI tools may reduce the productivity of experienced developers, and mentions Grok's troubled behavior alongside xAI's release of Grok 4.

Published on July 31, 2025 6:20 PM GMT

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

In this edition: The EU published a General-Purpose AI Code of Practice for AI providers, and Meta is spending billions revamping its superintelligence development efforts.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.


EU Publishes General-Purpose AI Code of Practice

In June 2024, the EU adopted the AI Act, which remains the world’s most significant law regulating AI systems. The Act bans some uses of AI like social scoring and predictive policing and limits other “high risk” uses such as generating credit scores or evaluating educational outcomes. It also regulates general-purpose AI (GPAI) systems, imposing transparency requirements, copyright protection policies, and safety and security standards for models that pose systemic risk (defined as those trained using ≥10²⁵ FLOPs).
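For a rough sense of what crosses that line, here is a minimal Python sketch applying the common 6ND approximation (roughly six FLOPs per parameter per training token) to a few hypothetical training runs. The model and dataset sizes below are illustrative assumptions, not figures from the Act or from any provider.

```python
# Minimal sketch: checking whether a hypothetical training run crosses the
# AI Act's systemic-risk threshold of 1e25 FLOPs, using the common 6*N*D
# approximation (about 6 FLOPs per parameter per training token).
# All model/dataset sizes below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as 6 * parameters * tokens."""
    return 6 * n_params * n_tokens

for name, n_params, n_tokens in [
    ("7B params, 2T tokens",   7e9,  2e12),   # ~8.4e22 FLOPs: well under
    ("70B params, 15T tokens", 70e9, 15e12),  # ~6.3e24 FLOPs: approaching
    ("1T params, 2T tokens",   1e12, 2e12),   # ~1.2e25 FLOPs: over
]:
    flops = estimate_training_flops(n_params, n_tokens)
    status = "systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```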

However, these safety and security standards are ambiguous—for example, the Act requires providers of GPAIs to “assess and mitigate possible systemic risks,” but does not specify how to do so. This ambiguity may leave GPAI developers uncertain whether they are complying with the AI Act, and regulators uncertain whether GPAI developers are implementing adequate safety and security practices.

To address this problem, on July 10, 2025, the EU published the General-Purpose AI Code of Practice. The Code is a voluntary set of guidelines for complying with the AI Act’s GPAI obligations before they take effect on August 2, 2025.

The Code of Practice establishes safety and security requirements for GPAI providers. The Code consists of three chapters—Transparency, Copyright, and Safety and Security. The last chapter, Safety and Security, only applies to the handful of companies whose models cross the Act’s systemic-risk threshold.

The Safety and Security chapter requires GPAI providers to create frameworks outlining how they will identify and mitigate risks throughout a model's lifecycle. These frameworks must follow a structured approach to risk assessment: for each major decision (such as a new model release), providers must complete three steps (sketched in code below):

1. Systemic risk identification: identify the systemic risks the model could pose, with particular attention to four categories: CBRN risks, loss of control, offensive cyber capabilities, and harmful manipulation.
2. Systemic risk analysis: analyze each identified risk, including its likelihood and severity.
3. Systemic risk acceptance determination: determine whether each analyzed risk falls within acceptable bounds before proceeding.
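As a purely illustrative schematic of that gate (all names and thresholds here are hypothetical, not anything specified by the Code), the three steps could be modeled in Python roughly like this:

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative schematic of a three-step risk assessment gate.
# Names, scales, and thresholds are hypothetical, not from the Code.

class RiskCategory(Enum):
    # Step 1: the four named categories to search for during identification.
    CBRN = "CBRN"
    LOSS_OF_CONTROL = "loss of control"
    CYBER_OFFENSE = "offensive cyber capabilities"
    HARMFUL_MANIPULATION = "harmful manipulation"

@dataclass
class RiskAnalysis:
    # Step 2: the output of analyzing one identified risk.
    category: RiskCategory
    likelihood: float  # estimated probability the risk materializes
    severity: float    # estimated harm if it does, on some agreed scale

def acceptable(analysis: RiskAnalysis, max_expected_harm: float) -> bool:
    """Step 3: acceptance determination against a pre-set criterion."""
    return analysis.likelihood * analysis.severity <= max_expected_harm

def may_release(analyses: list[RiskAnalysis], max_expected_harm: float) -> bool:
    """A release proceeds only if every analyzed risk is acceptable."""
    # e.g. may_release([RiskAnalysis(RiskCategory.CBRN, 0.01, 3.0)], 0.05)
    return all(acceptable(a, max_expected_harm) for a in analyses)
```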

Continuous monitoring, incident reporting timelines, and future-proofing. The Code requires continuous monitoring after models are deployed, and strict incident reporting timelines. For serious incidents, companies must file initial reports within days. It also acknowledges that current safety methods may prove insufficient as AI advances. Companies can implement alternative approaches if they demonstrate equal or superior safety outcomes.

AI providers will likely comply with the Code. While the Code is technically voluntary, compliance with the EU AI Act is not. Providers are incentivized to reduce their legal uncertainty by complying with the Code, since EU regulators will assume that providers who comply with the Code are also Act-compliant. OpenAI and Mistral have already indicated they intend to comply with the Code.

The Code formalizes some existing industry practices advocated for by parts of the AI safety community, such as publishing safety frameworks (also called responsible scaling policies) and system cards. Since frontier AI companies are very likely to comply with the Code, securing similar legislation in the US may no longer be a priority for the AI safety community.

Meta Superintelligence Labs

Meta spent $14.3 billion for a 49 percent stake in Scale AI, starting “Meta Superintelligence Labs.” The deal folds every AI group at Meta into one division and installs Scale founder Alexandr Wang, now Meta's chief AI officer, as the leader of Meta’s superintelligence development efforts.

Meta makes nine-figure pay offers to poach top AI talent. Reuters reported that Meta has offered “up to $100 million” to OpenAI staff, a tactic OpenAI CEO Sam Altman criticized. SemiAnalysis estimates that Meta's typical leadership packages run around $200 million over four years. For example, Bloomberg reports that Apple’s foundation-models chief Ruoming Pang left for Meta after a package “well north of $200 million.” Other early recruits come from OpenAI, DeepMind, and Anthropic.

Meta has created a well-resourced competitor in the superintelligence race. In response to Meta’s hiring efforts, OpenAI, Google, and Anthropic have already raised pay bands, and smaller labs might be priced out of frontier work.

Meta is also raising its compute expenditures. It lifted its 2025 capital-expenditure forecast to $72 billion, and SemiAnalysis describes new temporary “tent” campuses that can house one-gigawatt GPU clusters.

In Other News

Government

- California State Senator Scott Wiener expanded the SB 53 AI safety bill with new transparency measures.
- The Commerce Department is seeking additional funding for the Bureau of Industry and Security (BIS) to strengthen export-control enforcement.
- Missouri's attorney general is investigating whether AI chatbots exhibit political bias.
- The BRICS countries signed an agreement that includes commitments to mitigate AI risks.

Industry

- xAI released Grok 4, shortly after Grok drew criticism for problematic outputs.

Civil Society

- A study suggests that AI tools may reduce the productivity of experienced developers.

See also: CAIS’ X account, our paper on superintelligence strategy, our AI safety course, and AI Frontiers, a new platform for expert commentary and analysis.

Listen to the AI Safety Newsletter for free on Spotify or Apple Podcasts.

Subscribe to receive future versions.



