TechCrunch News
California lawmaker behind SB 1047 reignites push for mandated AI safety reports

California State Senator Scott Wiener has introduced amendments to SB 53 aimed at strengthening oversight of large AI companies, requiring them to publish safety protocols and issue reports when safety incidents occur. If passed, California would become the first state to impose substantive transparency requirements on leading AI developers, likely affecting companies such as OpenAI and Google. SB 53 seeks to balance industry growth with safety regulation and was shaped by recommendations from California's AI policy group. The bill also includes whistleblower protections and a plan to build a public cloud computing cluster to support AI startups. New York is considering a similar AI safety bill. Although no comparable regulation has emerged at the federal level, California is pressing ahead to ensure AI develops safely.

🧐 At its core, SB 53 would require large AI companies to publish their safety protocols and issue reports when safety incidents occur, increasing transparency in the AI industry.

⚖️ The bill aims to balance the AI industry's rapid growth against safety oversight, seeking effective regulation of AI companies without hindering the industry's development.

🛡️ SB 53 also contains whistleblower protections for employees who believe their company's AI technology poses a "critical risk," and proposes a public cloud computing cluster to support AI startups.

🌍 California is not acting alone: New York is considering a similar AI safety bill, reflecting states' growing focus on AI safety regulation and their willingness to act while federal lawmakers lag behind.

🤔 Although AI companies typically publish safety reports, the frequency and timeliness of those reports have declined recently. SB 53 aims to push AI companies to disclose more information about potential risks.

California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world’s largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.

If signed into law, California would be the first state to impose meaningful transparency requirements onto leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and it was ultimately vetoed by Governor Gavin Newsom. California’s governor then called on a group of AI leaders — including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs — to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve — ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group, Encode, in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society — defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.

Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

With the new amendments, SB 53 now heads to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will need to clear several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation — an attempt to limit the “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

“Ensuring AI is developed safely should not be controversial — it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency into AI companies, and even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they’ve been less consistent in recent months. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model to date, until months after the model was made available. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; a later third-party study suggested the model may be less aligned than its predecessors.

SB 53 represents a toned-down version of previous AI safety bills, but it still could force AI companies to publish more information than they do today. For now, they’ll be watching closely as Senator Wiener once again tests those boundaries.
