TechCrunch News, November 10, 2024
a16z VC Martin Casado explains why so many AI regulations are so wrong
This article examines the current dilemma in AI regulation, noting that many lawmakers focus heavily on AI's hypothetical future risks while overlooking the risks it actually poses. Andreessen Horowitz general partner Martin Casado argues that the existing regulatory system is well-equipped to handle the new challenges AI presents, and that new rules should not be drafted blindly. He criticizes California's proposal to build a "kill switch" into large AI models as rushed legislation that would hinder AI development rather than help. Casado stresses that policy should be grounded in an understanding of AI's real risks, not speculation about future ones, and urges regulators to draw on existing regulatory experience and craft solutions for specific problems instead of treating AI as a scapegoat for other technologies' failures.

🤔 **The state of AI regulation:** Many lawmakers focus more on AI's hypothetical future risks than on the risks it actually introduces; proposals such as a "kill switch" for AI models could end up hindering AI development instead.

💡 **Casado's view:** The existing regulatory system is well-equipped to handle the new challenges AI presents, so new rules should not be drafted blindly. Policy should be grounded in an understanding of AI's real risks, not speculation about future ones.

⚠️ **Regulatory advice:** Draw on existing regulatory experience and craft solutions for specific problems, such as those created by social media, rather than treating AI as a scapegoat.

📊 **Assessing AI risk:** Compare AI with other technologies, such as search engines and the internet, analyze the marginal risk it adds, and then set policy accordingly.

💼 **Who should regulate:** Multiple regulatory bodies already exist at the federal level, such as the Federal Communications Commission and the House Committee on Science, Space, and Technology, that could take on responsibility for AI regulation.

The problem with most attempts at regulating AI so far is that lawmakers are focusing on some mythical future AI experience, instead of truly understanding the new risks AI actually introduces.

So argued Andreessen Horowitz general partner Martin Casado to a standing-room crowd at TechCrunch Disrupt 2024 last week. Casado, who leads a16z’s $1.25 billion infrastructure practice, has invested in such AI startups as World Labs, Cursor, Ideogram, and Braintrust.

“Transformative technologies and regulation has been this ongoing discourse for decades, right? So the thing with all the AI discourse is it seems to have kind of come out of nowhere,” he told the crowd. “They’re kind of trying to conjure net-new regulations without drawing from those lessons.” 

For instance, he said, “Have you actually seen the definitions for AI in these policies? Like, we can’t even define it.” 

Casado was among a sea of Silicon Valley voices who rejoiced when California Gov. Gavin Newsom vetoed the state’s attempted AI governance law, SB 1047. The bill would have required super-large AI models to include a so-called kill switch, a mechanism for shutting them down. Those who opposed the bill said that it was so poorly worded that instead of saving us from an imaginary future AI monster, it would have simply confused and stymied California’s hot AI development scene.

“I routinely hear founders balk at moving here because of what it signals about California’s attitude on AI — that we prefer bad legislation based on sci-fi concerns rather than tangible risks,” he posted on X a couple of weeks before the bill was vetoed.

While this particular state law is dead, the fact it existed still bothers Casado. He is concerned that more bills, constructed in the same way, could materialize if politicians decide to pander to the general population’s fears of AI, rather than govern what the technology is actually doing. 

He understands AI tech better than most. Before joining the storied VC firm, Casado founded two other companies, including a networking infrastructure company, Nicira, that he sold to VMware for $1.26 billion a bit over a decade ago. Before that, Casado was a computer security expert at Lawrence Livermore National Lab.

He says that many proposed AI regulations did not come from, nor were supported by, many who understand AI tech best, including academics and the commercial sector building AI products.

“You have to have a notion of marginal risk that’s different. Like, how is AI today different than someone using Google? How is AI today different than someone just using the internet? If we have a model for how it’s different, you’ve got some notion of marginal risk, and then you can apply policies that address that marginal risk,” he said.

“I think we’re a little bit early before we start to glom [onto] a bunch of regulation to really understand what we’re going to regulate,” he argues.

The counterargument — and one several people in the audience brought up — was that the world didn’t really see the types of harms that the internet or social media could do before those harms were upon us. When Google and Facebook were launched, no one knew they would dominate online advertising or collect so much data on individuals. No one understood things like cyberbullying or echo chambers when social media was young.

Advocates of AI regulation now often point to these past circumstances and say those technologies should have been regulated early on. 

Casado’s response?

“There is a robust regulatory regime that exists in place today that’s been developed over 30 years,” and it’s well-equipped to construct new policies for AI and other tech. It’s true, at the federal level alone, regulatory bodies include everything from the Federal Communications Commission to the House Committee on Science, Space, and Technology. When TechCrunch asked Casado on Wednesday after the election if he stands by this opinion — that AI regulation should follow the path already hammered out by existing regulatory bodies — he said he did.

But he also believes that AI shouldn’t be targeted because of issues with other technologies. The technologies that caused the issues should be targeted instead.

“If we got it wrong in social media, you can’t fix it by putting it on AI,” he said. “The AI regulation people, they’re like, ‘Oh, we got it wrong in like social, therefore we’ll get it right in AI,’ which is a nonsensical statement. Let’s go fix it in social.”

