The Verge - Artificial Intelligence, September 11, 2024
Will California flip the AI industry on its head?

California is attempting to regulate artificial intelligence through SB 1047, a bill that would mandate strict safety testing and certification for large AI models and require developers to build a "kill switch" to contain potential risks. The bill has drawn fierce opposition from the tech industry, with critics arguing it would stifle innovation and supporters calling it a necessary safety measure. It has passed the California legislature, and its fate now rests with Governor Gavin Newsom.

💻 **SB 1047 would mandate strict safety testing and certification for large AI models**, requiring developers to conduct third-party evaluations and certify that their models pose no significant risk to humanity. It would also require developers to build a "kill switch" to shut down rogue models and to report safety incidents to a new regulatory agency.

📡 **The tech industry has pushed back hard against SB 1047**, arguing it would stifle innovation and could leave the US trailing China and Russia in artificial intelligence. Critics also worry the bill fixates on science-fiction visions of catastrophic AI while overlooking real-world risks.

📢 **Supporters see SB 1047 as a necessary safety measure** against the potential harms of AI, such as deepfake technology being used for harassment and fraud. They stress that AI is advancing quickly and needs strict regulation to stay safe.

📣 **SB 1047's fate now rests with California Governor Gavin Newsom**, who faces enormous pressure from the tech industry and fellow politicians. Whether the bill becomes law will have far-reaching consequences for the AI industry and could shape federal AI policy as well.

📤 **The final version of SB 1047 was substantially amended**, dropping the regulatory agency proposed in the original draft and softening developers' legal liability for safety violations. Even the amended version remains contentious: some argue it is still too strict, while others say it does not go far enough to protect the public.

Image: Cath Virginia / The Verge, Getty Images

SB 1047 aims to regulate AI, and the AI industry is out to stop it.

Artificial intelligence is moving quickly. It’s now able to mimic humans convincingly enough to fuel massive phone scams or spin up nonconsensual deepfake imagery of celebrities to be used in harassment campaigns. The urgency to regulate this technology has never been more critical — so, that’s what California, home to many of AI’s biggest players, is trying to do with a bill known as SB 1047.

SB 1047, which passed the California State Assembly and Senate in late August, is now on the desk of California Governor Gavin Newsom — who will determine the fate of the bill. While the EU and some other governments have been hammering out AI regulation for years now, SB 1047 would be the strictest framework in the US so far. Critics have painted a nearly apocalyptic picture of its impact, calling it a threat to startups, open source developers, and academics. Supporters call it a necessary guardrail for a potentially dangerous technology — and a corrective to years of under-regulation. Either way, the fight in California could upend AI as we know it, and both sides are coming out in force.

AI’s power players are battling California — and each other

The original version of SB 1047 was bold and ambitious. Introduced by state Senator Scott Wiener as the California Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it set out to tightly regulate advanced AI models trained with computing power at or above 10^26 FLOPS, roughly the scale of today's largest AI systems. The bill required developers of these frontier models to conduct thorough safety testing, including third-party evaluations, and certify that their models posed no significant risk to humanity. Developers also had to implement a "kill switch" to shut down rogue models and report safety incidents to a newly established regulatory agency. They could face lawsuits from the attorney general over catastrophic safety failures. If they lied about safety, developers could even face perjury charges, which carry the threat of prison (though perjury prosecutions are extremely rare in practice).
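To make that compute threshold concrete, here is a minimal back-of-the-envelope sketch, in Python, of how one might estimate whether a training run crosses the 10^26 FLOPS line. The ~6 × parameters × tokens estimate is a common community heuristic for dense transformer training, not language from the bill, and the example model size is hypothetical.

```python
# Back-of-the-envelope check of whether a training run would cross
# SB 1047's 10^26 FLOPS threshold. The 6 * params * tokens estimate
# is a rough community heuristic, not language from the bill.

COMPUTE_THRESHOLD_FLOPS = 1e26  # compute threshold named in the bill

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough total training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

def is_covered_model(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets the bill's threshold."""
    return estimated_training_flops(n_params, n_tokens) >= COMPUTE_THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 20T tokens.
flops = estimated_training_flops(1e12, 20e12)
print(f"{flops:.2e} FLOPs -> covered: {is_covered_model(1e12, 20e12)}")
# 1.20e+26 FLOPs -> covered: True (just above the 1e26 line)
```

By this rough math, only models at or beyond the scale of today's largest frontier systems would be covered, which is the bill's stated intent.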

California's legislators are in a uniquely powerful position to regulate AI. The country's most populous state is home to many leading AI companies, including OpenAI, which publicly opposed the bill, and Anthropic, which hesitated to support it until it was amended. SB 1047 also seeks to regulate any model that wishes to operate in California's market, giving it far-reaching impact well beyond the state's borders.

Unsurprisingly, significant parts of the tech industry revolted. At a Y Combinator event regarding AI regulation that I attended in late July, I spoke with Andrew Ng, cofounder of Coursera and founder of Google Brain, who talked about his plans to protest SB 1047 in the streets of San Francisco. Ng made a surprise appearance onstage later, criticizing the bill for its potential harm to academics and open source developers as Wiener looked on with his team.

“When someone trains a large language model...that’s a technology. When someone puts them into a medical device or into a social media feed or into a chatbot or uses that to generate political deepfakes or non-consensual deepfake porn, those are applications,” Ng said onstage. “And the risk of AI is not a function. It doesn’t depend on the technology — it depends on the application.”

Critics like Ng worry SB 1047 could slow progress, often invoking fears that it could erode the lead the US holds over adversarial nations like China and Russia. Representatives Zoe Lofgren and Nancy Pelosi and California's Chamber of Commerce worry that the bill is far too focused on fictional versions of catastrophic AI, and AI pioneer Fei-Fei Li warned in a Fortune column that SB 1047 would "harm our budding AI ecosystem." That's also a pressure point for FTC Chair Lina Khan, who spoke at the same YC event and is concerned about regulation stifling innovation in open-source AI communities.

Onstage at the YC event, Khan emphasized that open source is a proven driver of innovation, attracting hundreds of billions in venture capital to fuel startups. “We’re thinking about what open source should mean in the context of AI, both for you all as innovators but also for us as law enforcers,” Khan said. “The definition of open source in the context of software does not neatly translate into the context of AI.” Both innovators and regulators, she said, are still navigating how to define, and protect, open-source AI in the context of regulation.

A weakened SB 1047 is better than nothing

The result of the criticism was a significantly softer second draft of SB 1047, which passed out of committee on August 15th. In the new SB 1047, the proposed regulatory agency has been removed, and the attorney general can no longer sue developers for major safety incidents. Instead of submitting safety certifications under the threat of perjury, developers now only need to provide public “statements” about their safety practices, with no criminal liability. Additionally, entities spending less than $10 million on fine-tuning a model are not considered developers under the bill, offering protection to small startups and open source developers.
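As a rough illustration of that carve-out, here is a minimal sketch of the amended bill's developer classification as described above. The function and parameter names are hypothetical, and actual statutory coverage turns on far more than this one test.

```python
# Minimal sketch of the amended bill's fine-tuning carve-out: entities
# spending under $10 million to fine-tune a model are not treated as
# "developers". Names here are illustrative, not statutory language.

FINE_TUNE_EXEMPTION_USD = 10_000_000

def is_developer_under_amended_bill(trained_base_model: bool,
                                    fine_tune_spend_usd: float = 0.0) -> bool:
    """Simplified coverage test under the amended SB 1047."""
    if trained_base_model:
        return True  # original frontier-model trainers remain covered
    # Fine-tuners are covered only at or above the $10M spend threshold.
    return fine_tune_spend_usd >= FINE_TUNE_EXEMPTION_USD

print(is_developer_under_amended_bill(False, 2_000_000))   # False: exempt small lab
print(is_developer_under_amended_bill(False, 50_000_000))  # True: large fine-tune
print(is_developer_under_amended_bill(True))               # True: base-model developer
```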

Still, that doesn’t mean the bill isn’t worth passing, according to supporters. Even in its weakened form, if SB 1047 “causes even one AI company to think through its actions, or to take the alignment of AI models to human values more seriously, it will be to the good,” wrote Gary Marcus, emeritus professor of psychology and neural science at NYU. It will still offer critical safety protections and whistleblower shields, which some may argue is better than nothing.

Anthropic CEO Dario Amodei said the bill was “substantially improved, to the point where we believe its benefits likely outweigh its costs” after the amendments. In a statement in support of SB 1047 reported by Axios, 120 current and former employees of OpenAI, Anthropic, Google’s DeepMind, and Meta said they “believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.”

“It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks,” the statement said.

Meanwhile, many detractors haven’t changed their position. “The edits are window dressing,” Andreessen Horowitz general partner Martin Casado posted. “They don’t address the real issues or criticisms of the bill.”

There’s also OpenAI’s chief strategy officer, Jason Kwon, who said in a letter to Newsom and Wiener that “SB 1047 would threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state in search of greater opportunity elsewhere.”

“Given those risks, we must protect America’s AI edge with a set of federal policies — rather than state ones — that can provide clarity and certainty for AI labs and developers while also preserving public safety,” Kwon wrote.

Newsom’s political tightrope

Though this heavily amended version of SB 1047 has made it to Newsom's desk, he's been noticeably quiet about it. Regulating technology has always involved political maneuvering, and Newsom's tight-lipped approach to such a controversial bill signals as much. He may not want to rock the boat with technologists just ahead of a presidential election.

Many influential tech executives are also major donors to political campaigns, and in California, home to some of the world’s largest tech companies, these executives are deeply connected to the state’s politics. Venture capital firm Andreessen Horowitz has even enlisted Jason Kinney, a close friend of Governor Newsom and a Democratic operative, to lobby against the bill. For a politician, pushing for tech regulation could mean losing millions in campaign contributions. For someone like Newsom, who has clear presidential ambitions, that’s a level of support he can’t afford to jeopardize.

What’s more, the rift between Silicon Valley and Democrats has grown, especially after Andreessen Horowitz’s cofounders voiced support for Donald Trump. The firm’s strong opposition to SB 1047 means if Newsom signs it into law, the divide could widen, making it harder for Democrats to regain Silicon Valley’s backing.

So, it comes down to Newsom, who's under intense pressure from the world's most powerful tech companies and fellow politicians like Pelosi. While lawmakers have been working to strike a delicate balance between regulation and innovation for decades, AI is nebulous and unprecedented, and a lot of the old rules don't seem to apply. For now, Newsom has until the end of September to make a decision that could upend the AI industry as we know it.
