a16z · February 19
Base AI Policy on Evidence, Not Existential Angst

The AI policy debate is running hot, but it is riddled with anxieties and extreme viewpoints, making a reasonable policy position hard to pin down. The article argues for focusing on AI's marginal risk: the new class of risk that a new technology introduces and that requires a paradigmatic shift in policy to handle. Drawing on the lessons of internet regulation, we should avoid measures that are ineffective or even weaken security, such as attempts to regulate math or to add backdoors. AI policy should be grounded in reality and informed by thorough research on marginal risk, not by concerns divorced from reality. AI is already showing positive impact in many fields, so until its marginal risk is properly understood, we should recognize AI's enormous potential.

⚠️ Marginal risk is the new class of risk that a new technology introduces and that requires a paradigmatic shift in policy to handle. Focusing on marginal risk avoids spurious regulation and improves security by concentrating effort on the right issues.

🛡️ Draw on the lessons of internet regulation and avoid measures that are ineffective or even weaken security. Attempts to regulate math or to add backdoors to phones or cryptography, for example, will not work for AI either.

🔬 AI policy should be grounded in reality and informed by thorough research on marginal risk, not by concerns divorced from it. The dire warnings about GPT-2, for example, never materialized, while AI has shown positive impact in many fields.

🚗 AI is showing enormous potential across many domains: self-driving cars are safer than human drivers, and computer systems diagnose more accurately than doctors. The best policy may therefore be to invest aggressively in AI rather than to over-restrict it.

The AI policy discourse has picked up significantly this year. Although advocates for common-sense AI regulation breathed a sigh of relief when California Governor Gavin Newsom vetoed his state's controversial SB 1047 in September, there are still hundreds of AI-focused bills circulating through U.S. statehouses, and it's unclear how the federal government will approach AI regulation in the months and years to come.

What is clear, though, is that we need a better, simpler, and, ultimately, reasonable way of thinking about this very important issue. However, it's hard to discern what a reasonable policy position would be when there are so many extreme viewpoints and so much general confusion.

Part of the problem is that the discourse has become a free-for-all proxy battle for airing everybody's anxieties about artificial intelligence and tech more broadly. Even narrowly focused AI policy initiatives quickly become (virtual) shouting matches among well-funded organizations concerned with existential risk, industry groups concerned with AI's impact on jobs and copyright, and policymakers trying to remedy the perception that they missed their window to effectively regulate social media. This can drown out legitimate concerns that AI policy overreach could enable regulatory capture and damage America's economy, innovative spirit, and global competitiveness.

But despite all the hubbub and competing interests, there actually is a reasonable policy position the United States can take: focus on marginal risk and apply our regulatory energy there. It's a simple approach that has already been proposed by a number of the top AI academics in the industry, and it's worth understanding.

Avoiding spurious AI regulations

Marginal risk refers to a new class of risk, introduced by a new technology, that requires a paradigmatic shift in policy to handle it. We saw this with the internet, where, early on, new forms of computer threats (like internet worms) emerged. On the national security front, we had to shift our posture to deal with vulnerability asymmetry: being more reliant on computer systems made us more vulnerable than other nations.

Critically, focusing on marginal risks avoids spurious regulation, improving security by concentrating on the right issues instead of wasting our efforts on ineffective policy.

More broadly focused policies and tactics for governing information systems have been shaped over decades, with each new epoch raising concerns to which the industry must respond. Every computer system built now and going forward is already subject to those policies. Overall, this policy work, such as the Internet Crimes Against Children Task Force or the extension of lawful intercept to computer systems, has improved the regulatory landscape for coping with new technologies. Efforts to limit access to enabling hardware, such as export restrictions on computer chips, have had limited but likely positive outcomes for the United States.

Still, other policies have failed in every attempt to employ them, and some might even weaken security. These include approaches such as attempting to regulate math or adding backdoors to phones or cryptography. Absent a material change in marginal risk, these types of approaches will fail with AI, too.

AI policy based on reality

When it comes to regulating AI, we should draw from these lessons, not ignore them. We should depart from the existing regulatory regime, and carve new ground, only once we understand the marginal risks of AI relative to existing computer systems. Thus far, however, the discussion of AI's marginal risks is still very much based on research questions and hypotheticals. This is not just my perspective; it has been clearly stated by a highly respected group of experts on the matter.

Focusing on evidence-based policy, meaning real, thorough research on marginal risk, is particularly important because the litany of concerns about AI has been quite divorced from reality. For example, many decried OpenAI's GPT-2 model as too dangerous to release, and yet we now have multiple models, many times more powerful, that have been in production for years with minimal effect on the threat landscape. Just recently, there was rampant fear-mongering that deepfakes would skew the U.S. presidential election, yet we haven't seen a single meaningful example of that happening.

On the contrary, AI appears to be tremendously safe. In fact, we now have cars that drive more safely than humans, computer systems that diagnose better than doctors, and countless advances in areas ranging from creative endeavors to biotechnology, all because of AI. In the end, we might conclude that the best policy for human welfare is to invest aggressively in AI rather than to encumber it.

So, until we've established a reasonable understanding of its marginal risk, let's be sure to recognize AI's tremendous potential to have a positive impact on the world, a promise on which, to some degree, it is already delivering.

This article originally appeared on Fortune.com.


Related tags

AI regulation · Marginal risk · Policymaking · Technological innovation