The Verge - Artificial Intelligence, August 22, 2024
OpenAI exec says California’s AI safety bill might slow progress

In an open letter, OpenAI chief strategy officer Jason Kwon voiced opposition to California's AI bill SB 1047, arguing that it could hinder AI development, and called for the federal government to lead AI regulation in order to foster innovation and set global standards. Kwon warned that SB 1047's requirements could drive companies out of California and harm AI innovation.

🤖 In an open letter, OpenAI expressed its opposition to California's AI bill SB 1047, arguing that it could hinder AI development. The bill aims to set standards for powerful AI models, requiring safety testing and other safeguards, and gives California's Attorney General the power to take legal action.

💬 OpenAI contends that SB 1047's requirements are overly strict, could drive companies out of California, and would harm AI innovation. In its view, federal-led AI regulation with uniform standards would better promote AI development.

🤝 OpenAI's position is supported by other AI labs, developers, experts, and members of California's Congressional delegation, who believe federal regulation would better foster AI innovation and enable global standards.

📝 California State Senator Scott Wiener rebutted OpenAI's argument, calling SB 1047 a reasonable bill that asks large AI labs to conduct safety testing, and noting that it applies to all companies doing business in California, regardless of where they are headquartered.

Image: The Verge

In a new letter, OpenAI chief strategy officer Jason Kwon insists that AI regulations should be left to the federal government. As reported previously by Bloomberg, Kwon says that a new AI safety bill under consideration in California could slow progress and cause companies to leave the state.

A federally-driven set of AI policies, rather than a patchwork of state laws, will foster innovation and position the U.S. to lead the development of global standards. As a result, we join other AI labs, developers, experts and members of California’s Congressional delegation in respectfully opposing SB 1047 and welcome the opportunity to outline some of our key concerns.

The letter is addressed to California State Senator Scott Wiener, who originally introduced SB 1047, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act.

According to proponents like Wiener, it establishes standards ahead of the development of more powerful AI models, requires precautions like pre-deployment safety testing and other safeguards, adds whistleblower protections for employees of AI labs, gives California’s Attorney General power to take legal action if AI models cause harm, and calls for establishing a “public cloud computer cluster” called CalCompute.

In a response to the letter published Wednesday evening, Wiener points out that the proposed requirements apply to any company doing business in California, whether they are headquartered in the state or not, so the argument “makes no sense.” He also writes that OpenAI “...doesn’t criticize a single provision of the bill” and closes by saying, “SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing, namely, test their large models for catastrophic safety risk.”

Following concerns from politicians like Zoe Lofgren and Nancy Pelosi, companies like Anthropic, and organizations such as California’s Chamber of Commerce, the bill passed out of committee with a number of amendments that included tweaks like replacing criminal penalties for perjury with civil penalties and narrowing pre-harm enforcement abilities for the Attorney General.

The bill is currently awaiting its final vote before going to Governor Gavin Newsom’s desk.

Here is OpenAI’s letter in full:

