TechCrunch News, March 4
The author of SB 1047 introduces a new AI bill in California

California state Senator Scott Wiener has introduced a new AI bill, SB 53, which aims to protect employees at AI labs by allowing them to speak out when they believe their company's AI systems pose a "critical risk" to society, and to create a public cloud computing cluster, CalCompute. The bill has drawn considerable discussion, as parts of it are drawn from the earlier, controversial SB 1047.

🌐 SB 53 protects AI lab employees, enabling them to call out "critical risks" in their company's AI systems.

💻 It creates a public cloud computing cluster, CalCompute, to provide resources for research and startups.

📋 It specifies which developers the bill applies to and bars frontier AI model developers from retaliating against whistleblowers.

🎯 It establishes a group to build out CalCompute and determine its construction, scale, and which users and organizations get access.

The author of California’s SB 1047, the nation’s most controversial AI safety bill of 2024, is back with a new AI bill that could shake up Silicon Valley.

California state Senator Scott Wiener introduced a new bill on Friday that would protect employees at leading AI labs, allowing them to speak out if they think their company’s AI systems could be a “critical risk” to society. The new bill, SB 53, would also create a public cloud computing cluster, called CalCompute, to give researchers and startups the necessary computing resources to develop AI that benefits the public.

Wiener’s last AI bill, California’s SB 1047, sparked a lively debate across the country around how to handle massive AI systems that could cause disasters. SB 1047 aimed to prevent the possibility of very large AI models creating catastrophic events, such as causing loss of life or cyberattacks costing more than $500 million in damages. However, Governor Gavin Newsom ultimately vetoed the bill in September, saying SB 1047 was not the best approach.

But the debate over SB 1047 quickly turned ugly. Some Silicon Valley leaders said SB 1047 would hurt America’s competitive edge in the global AI race, and claimed the bill was inspired by unrealistic fears that AI systems could bring about science fiction-like doomsday scenarios. Meanwhile, Senator Wiener alleged that some venture capitalists engaged in a “propaganda campaign” against his bill, pointing in part to Y Combinator’s claim that SB 1047 would send startup founders to jail, a claim experts argued was misleading.

SB 53 essentially takes the least controversial parts of SB 1047 – such as whistleblower protections and the establishment of a CalCompute cluster – and repackages them into a new AI bill.

Notably, Wiener is not shying away from existential AI risk in SB 53. The new bill specifically protects whistleblowers who believe their employers are creating AI systems that pose a "critical risk." The bill defines critical risk as "a foreseeable or material risk that a developer's development, storage, or deployment of a foundation model, as defined, will result in the death of, or serious injury to, more than 100 people, or more than $1 billion in damage to rights in money or property."

SB 53 bars frontier AI model developers – likely including OpenAI, Anthropic, and xAI, among others – from retaliating against employees who disclose concerning information to California's Attorney General, federal authorities, or other employees. Under the bill, these developers would be required to report back to whistleblowers on the internal processes the whistleblowers find concerning.

As for CalCompute, SB 53 would establish a group to build out a public cloud computing cluster. The group would consist of University of California representatives, as well as other public and private researchers. It would make recommendations for how to build CalCompute, how large the cluster should be, and which users and organizations should have access to it.

Of course, it’s very early in the legislative process for SB 53. The bill needs to be reviewed and passed by California’s legislative bodies before it reaches Governor Newsom’s desk. State lawmakers will surely be waiting for Silicon Valley’s reaction to SB 53.

However, 2025 may be a tougher year to pass AI safety bills compared to 2024. California passed 18 AI-related bills in 2024, but now it seems as if the AI doom movement has lost ground.

Vice President J.D. Vance signaled at the Paris AI Action Summit that America is not interested in AI safety, but rather prioritizes AI innovation. While the CalCompute cluster established by SB 53 could surely be seen as advancing AI progress, it’s unclear how legislative efforts around existential AI risk will fare in 2025.

