AI Snake Oil · December 13, 2024
Are open foundation models actually more risky than closed ones?

This article examines the future of open foundation models in AI and the questions surrounding them. It notes the potential of these models to help distribute power, but also points to existing risks, such as their use to generate large volumes of non-consensual intimate imagery. It lays out several considerations for policymaking, stressing that policy should be grounded in empirical evidence, and notes that a more in-depth risk analysis is underway.

🎯 Open foundation models have the potential to help distribute power

🚫 Risks exist, such as the generation of non-consensual intimate imagery

💡 Several considerations for policymaking are proposed

📄 A more in-depth risk analysis is underway

Some of the most pressing questions in artificial intelligence concern the future of open foundation models (FMs). Do these models pose risks so large that we must attempt to stop their proliferation? Or are the risks overstated and the benefits underemphasized?

Earlier this week, in collaboration with Stanford HAI, CRFM, and RegLab, we released a policy brief addressing these questions. The brief is based on lessons from a workshop we organized this September and on our work since. It outlines the current evidence on the risks of open FMs and offers recommendations for how policymakers should reason about those risks.


In the brief, we highlight the potential of open FMs to aid the distribution of power and to increase innovation and transparency. We also argue that several of the purported risks of open FMs, such as biosecurity and cybersecurity risks, are overstated relative to the current evidence.

At the same time, open FMs have already led to harm in other domains. Notably, these models have been used to create vast amounts of non-consensual intimate imagery and child sexual abuse material.

We outline several considerations for informed policymaking, including the fact that policies requiring content provenance and placing liability for downstream harms onto open model developers would lead to a de facto ban on open FMs. 

We also point out that these harms can be addressed at points downstream of the model itself, such as the platforms used to share AI-generated non-consensual pornography. For example, CivitAI allowed users to post bounties for non-consensual pornography of real people, with rewards for the developers of the best model. Such choke points are likely to be more effective targets for intervention.

One reason for the recent focus on open FMs is the White House executive order on AI. Since the relative risk of open and closed FMs is an area of ongoing debate, the EO didn’t take a position on it; instead, the White House directed the National Telecommunications and Information Administration (NTIA) to launch a public consultation on this question.

The NTIA kicked off this consultation earlier this week in collaboration with the Center for Democracy and Technology, at an event where one of us spoke.

While policies should be guided by empirical evidence, this doesn’t mean we shouldn’t think about risks that might arise in the future. In fact, we think investing in early-warning indicators of the risks of FMs (including open FMs) is important. But in the absence of such evidence, policymakers should be cautious about developing policies that curb the benefits of open FMs while doing nothing to reduce their harms.

To build a better understanding of the risks of open models, we are currently working on a more in-depth paper analyzing the benefits and risks of open FMs with a broad group of experts. We hope that our policy brief, as well as the upcoming paper, will be useful in charting the path for policies regulating FMs.
