[Video] Why SB-1047 deserves a fairer debate

 

The California AI bill (SB 1047) aims to prevent catastrophic harm by regulating large AI models. It requires developers to adopt safety measures and to evaluate their models to make sure they are safe and reliable. The bill has proven controversial: some argue it will stifle innovation, while others see it as a necessary safety measure.

🤔 **Background of the California AI bill**: As AI technology develops rapidly, its potential risks have become increasingly apparent. The bill aims to prevent catastrophic harm by regulating large AI models. It covers models trained with more than 10^26 floating-point operations (FLOPs) and requires developers to take a series of safety measures, including putting cybersecurity protections in place, evaluating their models, and being able to shut a model down when necessary.

🤨 **Controversy and debate**: The bill has sparked wide controversy in the tech and Silicon Valley community. Some argue it is too vague and could stifle innovation, especially for smaller developers. They worry that, fearing liability, developers will choose to innovate in other domains, threatening the United States' lead in AI. Open-source models are also contested: large developers may decline to release models publicly out of liability concerns, while smaller developers may avoid open-source models for fear of legal risk.

🎯 **How the bill has evolved and what remains disputed**: Amid the controversy, the bill was amended to add a training-cost requirement and to clarify the definition of derivative models. The question of who bears responsibility remains contested, however, and the tech community is still divided over the bill's final form.

💡 **Why AI safety matters**: Although AI safety is still in its infancy, that is no reason to dismiss the need for regulation. As AI technology advances, its potential risks grow, so putting necessary safety measures in place is essential. We need to make sure AI develops in ways that benefit humanity rather than causing disaster.

🤔 **The future direction of AI regulation**: AI regulation is a complex issue that requires balancing safety and innovation. Future regulation should pay more attention to how models are used, not only to developers' responsibility. We also need more AI safety research so that we can better understand and control AI systems and ensure they are safe and reliable.

Published on August 20, 2024 10:44 AM GMT

(Cross-posted from the Forum)

Hello, I am attempting to get better at making YouTube videos, and I could do with your feedback. I expect I have made mistakes in this video, or presented information in a confusing way. If you'd like to help, please fill in this form!

(This is on the frontpage of the EA Forum. I feel even more apprehensive about LessWrong, so I have made this a personal blogpost. Though I think I am actually unable to post anything to the frontpage yet?) Also, I started a discussion about where it's appropriate to put these videos on the Forum. I feel a bit conflicted about putting it on the frontpage, so let me know what you think.


Below is a transcript, lightly edited to correct errors and improve clarity.

Introduction [0:00]

Hello from Glasgow. The weather today is windy with some bits of sunshine, and hopefully, it will be sunnier later today. Leopold Aschenbrenner once said, "You can see the future first in San Francisco." However, when I first arrived in San Francisco, the first thing I saw was the airport. Last week, I said this:

"Maybe you're a policy enthusiast, or perhaps you've spent time in legislation before. You might have experience in lobbying or working in think tanks. You'll still be important because responsible scaling policies will eventually translate into real, codified legislation." Well, it turns out there's an actual piece of legislation that we can analyze.

What does the Bill do? [00:36]

SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (also known as the California AI Bill), marks the beginning of a regulatory environment around artificial intelligence. It's interesting to examine the various components of this bill.

Slight correction: The term FLOPs is an unhelpful acronym, and I got confused in the video and used 'Floating Operation Points'. It actually stands for Floating-point operations! 

The bill includes a section defining which models are covered. Covered models are those trained with more than 10^26 floating-point operations (FLOPs). FLOPs count the arithmetic operations performed while training a model, so they serve as a rough measure of how much computation went into it. It's generally a good heuristic that more FLOPs correlate with a more powerful model. While this isn't always true (heuristics aren't infallible), it usually provides a good indication.
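To get a feel for the scale of that threshold, here is a quick back-of-the-envelope sketch in Python. It uses the common "6 × parameters × training tokens" approximation for training FLOPs; that approximation is not part of the bill, and the model size and token count below are purely illustrative.

```python
# Back-of-the-envelope estimate of training compute, using the common
# approximation: total FLOPs ~= 6 * (parameters) * (training tokens).
# The 10^26 threshold comes from the bill; the example model below is made up.

COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # SB 1047's compute threshold


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total floating-point operations used to train a dense model."""
    return 6 * n_parameters * n_training_tokens


# Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training FLOPs: {flops:.2e}")                             # ~6.3e24
print("Over the 10^26 threshold?", flops > COVERED_MODEL_THRESHOLD_FLOPS)   # False
```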

The bill requires developers of these covered models to implement the following measures:

- Put cybersecurity measures in place to prevent unauthorized use of the models.
- Establish evaluations to test the capabilities of the models.
- Have the ability to shut down the model completely if it's found to be doing something undesirable.

The bill aims to prevent AI systems from causing catastrophic harm, defined as an AI system helping to create a weapon of mass destruction that causes mass casualties, or carrying out a cyberattack that causes casualties or more than $500 million in damage. If an AI system does this, the developer will be held liable.

The original bill included an accountability framework that would have established a Frontier Model Division to issue guidance and ensure compliance among the developers of covered models. It also would have issued certifications showing that developers were complying, provided whistleblower protections, and required AI developers to report their findings about the models they develop.

It's worth noting that some of this has changed since the original bill was written. For example, the frontier model division is no longer part of the bill.

Backlash against the Bill [02:27]

This bill, pushed by Scott Wiener, has faced immense backlash from the tech and Silicon Valley community. Part of this backlash is justified. The bill in its original form was vague in parts, and it's good that people push back against a bill that might lack specificity, especially when it concerns a new and fast-moving technology. However, in my opinion, much of the opposition rests on justifications and arguments that don't make much sense to me.

There are two main overarching reasons for the opposition against this bill:

    The "stifling innovation" argument: The concern from this perspective is that smaller developers with fewer resources will be worried that this bill might affect them if their system ends up doing something catastrophic. Out of fear and hesitation, they may choose to innovate in other domains. This could result in fewer people working on AI out of fear. Those in this camp are also concerned that other countries, like China, could catch up with the progress the United States has made. Given the rhetoric around China, this leads to people in the United States growing more fearful and wanting the United States to remain at the forefront of this new technology.The open source challenge: Many models can be released publicly for people to fine-tune, which means re-engineering models to better suit specific purposes. This presents two sides to the issue. If you're a large developer, you may be less inclined to release your model publicly because you're concerned about being held responsible for someone else's mistake. Alternatively, you might choose not to use an open-source model out of concern about whether the legislation will apply to you.

This has created an atmosphere of apprehension. The opposition has framed this as if the bill is designed to concentrate power with big AI labs, suggesting that smaller developers or "the little guys" in tech won't be able to innovate anymore or follow the principles that Silicon Valley and the tech community thrive on.

As a result, the bill has been further clarified, or watered down in some respects. It now has an additional requirement alongside the 10^26 floating-point operations threshold: the model must also have cost at least $100 million in compute to train. That's a significant amount of money. If you're spending $100 million to build a model, you're probably not a small player in tech.

There are also new clarifications around derivative models. It's now clear that if you fine-tune a model using less than 25% of the compute that was used to train the original, you are not treated as the developer of a new covered model, and the original developers remain responsible. For example, if I took a model from Anthropic, fine-tuned it in certain ways, and used less than 25% of their original computation, I would not be held responsible; Anthropic would. If I used more, then I would be held responsible. To be clear, 25% of the original computation is still a substantial amount of computational power.
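To make the coverage and responsibility rules concrete, here is a toy Python sketch of the logic as I have just described it. This is my paraphrase of the bill's thresholds, not its legal text, and the function names, variable names, and example numbers are all my own.

```python
# Toy encoding of the coverage and responsibility rules as described above.
# This paraphrases the transcript's summary of the bill, not the legal text itself.

FLOP_THRESHOLD = 1e26          # covered-model compute threshold
COST_THRESHOLD_USD = 100e6     # covered-model training-cost threshold (post-amendment)
DERIVATIVE_FRACTION = 0.25     # fine-tuning fraction above which responsibility shifts


def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """A model is covered only if it exceeds both the FLOP and the cost thresholds."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > COST_THRESHOLD_USD


def responsible_party(original_training_flops: float, fine_tuning_flops: float) -> str:
    """Who is on the hook for a fine-tuned version of a covered model?"""
    if fine_tuning_flops < DERIVATIVE_FRACTION * original_training_flops:
        return "original developer"  # the fine-tune stays the original developer's responsibility
    return "fine-tuner"              # enough new compute that responsibility shifts


# Hypothetical example: fine-tuning a covered model with 5% of its original compute.
print(is_covered_model(training_flops=2e26, training_cost_usd=150e6))                  # True
print(responsible_party(original_training_flops=2e26, fine_tuning_flops=0.05 * 2e26))  # original developer
```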

There seems to be a core problem surrounding where the responsibility lies. AI labs are not happy with the way this bill has developed. Neither are the smaller startups that are trying to establish themselves. This creates tension. Who should be held responsible for these models if a legal case is brought against one of them? Who will be brought to court?

While I believe these are discussions worth having, I think the way this bill has been reconstructed since its original drafting has made it even more explicitly clear that the target is the big AI labs. They are the ones that are going to be kept in check, and they need to be held responsible for the large models they develop. This makes sense. It will be the big AI labs that push the frontier of AI. If we ever get to a transformative AI system, it's very likely that those labs will be the ones that get us there. I think it's good that most of the focus and efforts are placed on them.

Safety is in its Infancy [06:59]

Another class of arguments suggests that safety is in its infancy. The claim is that if we try to regulate something now, the way AI models will look in 10 years' time is so different that any regulation now is pointless. Again, this is really difficult to contend with because, yes, it's true that many aspects of models will be different in 10 years. Yet we should still have regulations for the things we have now. Throughout human history, innovation has continued, but we've maintained checkpoints and made sure that what is being built is safe and in line with what we consider reasonable.

Bundled with this general idea that safety is in its infancy is the notion that extinction risks from AI are nonsense, and the idea that we should not be regulating the developers of these models, but rather how these models are used.

A set of confusing analogies has emerged from these arguments. (I mispronounce Thrun in the video. Sorry Sebastian! As someone whose own name is difficult to pronounce, I find this highly ironic.) Sebastian Thrun said, "AI is like a kitchen knife. You could use it to cut an onion, use it to chop something, or you could misuse it to stab someone." He argues that we should not be trying to regulate the developers of kitchen knives, but rather we should be focusing on preventing the misuse of a knife.

However, there's a small problem with this analogy. I have a kitchen knife. I can understand it and control it. I can use it to chop things really slowly or really quickly. An AI system, on the other hand, is not like this. Over the last few years, ML researchers have built up the field of interpretability to try to figure out what's going on inside an AI system. It turns out that working out what's happening inside is hard: there has been some progress, but it is slow.

So no, AI is not like a kitchen knife. With a kitchen knife, I know what I'm doing, I'm controlling it, I can chop, and mostly things won't go badly. Ow!

The point of the bill is that AI labs, especially the frontier AI labs, don't really have a strong grip on their AI systems. It is still the case that we do not understand what's happening inside a system. Given that there is some risk that it might do things we don't want it to, we should try to have regulation around it.

To me, the problem is that there are perverse incentives within the tech community. There are many incentives to try and create a really powerful AI model because having a powerful AI model means you're powerful. You have access to a technology that can speed up scientific research and development and can do things much faster than human beings. If you have access to that, you have more power, and you're going to seek it. I think in the tech community, there are these incentives to seek power and try to get to the top.

If you are trying to regulate a piece of technology, you do not want to be pushed around by the people building that very technology. And I think that's what's happening here with SB 1047. You have the tech industry lobbying against this, and not enough people saying, "I think this makes sense." For so long, this industry has been able to build fast, go mostly undetected in what it does, and build ever more capable systems. And yes, nothing has been put in place to stop them. But now that something has, there's pushback. I think there's a need to take a step back, evaluate this, and say, "Hey, these regulations are sensible." For all other kinds of technologies we have regulations and standards in place, yet somehow AI seems to be the one technology that must not be touched.

What's Next [10:42]

So what's next? The bill has just gone through the Assembly Appropriations Committee, which is the place where bills usually go to die. I think it's quite likely that this bill is going to be made into law. Metaculus, a forecasting website, has this at around 60 percent likelihood at the time I'm recording this. I think there's some risk that the law ends up being ineffective in certain ways. (No longer true!)

In my next videos, I'm going to talk more broadly about AI governance ideas. I'm not going to focus on this bill until it actually becomes law, and then I may have a little bit of a retrospective. I also would like to hear your feedback. I'm new to making these videos, and some of these videos aren't going to be great. Whether you disagreed or agreed, it's helpful to hear. There's going to be a feedback form in the description, and obviously, the comments will be available for discussion.



