Scalable oversight as a quantitative rather than qualitative problem

 

This post discusses the use cases of scalable oversight, emphasizing its importance for saving human labor when evaluating AI behavior, especially when that behavior is too complex to assess cheaply. The author argues that the future focus of scalable oversight will be on improving oversight quality through sensible allocation of resources, rather than only on AI behavior that humans are unable to understand at all.

🤔 **Use cases for scalable oversight:** The post points out that scalable oversight is not only for evaluating the actions of a superintelligent AI; more importantly, it can be used to evaluate large numbers of AI actions, even when any one of them would take humans a long time to understand. Scalable oversight can raise oversight quality at a lower cost in human labor, improving the trade-off between quality and cost. For example, in a large AI project where evaluating every action would take a lot of human time, scalable oversight can substantially reduce the labor required and make oversight more efficient.

🤔 **Future directions for scalable oversight:** The author argues that research should focus on sample efficiency, i.e. how to reach a given level of model performance with fewer training labels. This is tied directly to the cost of oversight, since the number of labels drives that cost. The author also argues that scalable oversight should take the cost of human oversight into account and allocate resources according to the situation; for example, security audits of code may warrant more human labor to ensure high-quality oversight.

🤔 **Latency in scalable oversight:** The post also discusses the latency of oversight. If oversight latency is too high, AI agents may become less efficient, or online training may suffer. For example, if an AI agent has to wait for a human evaluation and the evaluation takes too long, the agent's efficiency drops; online training also needs timely feedback, and labels that arrive too late hurt training.

Published on July 6, 2024 5:42 PM GMT

[Many of these ideas were developed in conversation with Ryan Greenblatt and Ansh Radhakrishnan; a lot of this isn’t original but I haven’t seen it written up]

A lot of the time when people talk about scalable oversight, which I'll define for the moment as "the problem of developing techniques that allow you to assess whether an action taken or proposed by an AI is good (oversee it), in a way that works even if the AI is very smart and knowledgeable", the main application they describe is the following situation: you have an extremely powerful AI that's taking actions that might have subtle properties that humans are unable to understand, and you want to oversee them (e.g. to train it, or to determine whether the actions will have catastrophic consequences).

I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it's missing the more central application of these techniques: situations where the AIs are taking many actions, where humans would eventually understand any particular action if they spent a whole lot of time investigating it, but where spending that much time on every action is prohibitively expensive. In such cases, the point of scalable oversight is to let humans oversee actions at a much lower cost in human time--to push out the Pareto frontier of oversight quality vs cost.
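
To make the quality-vs-cost framing concrete, here is a minimal sketch (the techniques, per-action costs, and quality numbers are all invented for illustration): given a menu of oversight procedures, you can only use the ones that fit under your per-action budget, and a good scalable oversight technique is one that adds a point above the existing frontier at a cost you can actually afford.

```python
# Illustrative sketch only: the techniques, costs, and "quality" numbers
# below are invented for the example, not measurements.

# name -> (per-action cost in human-hours, quality, e.g. P(catch a bad action))
techniques = {
    "skim the transcript":            (0.1, 0.55),
    "careful human review":           (2.0, 0.80),
    "full independent investigation": (40.0, 0.95),
    # A hypothetical scalable-oversight method: it pushes out the frontier by
    # getting near-"careful review" quality at near-"skim" cost.
    "assisted review":                (0.3, 0.75),
}

def best_under_budget(techniques, per_action_budget_hours):
    """Pick the highest-quality technique we can afford for each action."""
    affordable = {name: (cost, q) for name, (cost, q) in techniques.items()
                  if cost <= per_action_budget_hours}
    if not affordable:
        return None
    return max(affordable.items(), key=lambda kv: kv[1][1])

for budget in [0.2, 0.5, 5.0, 100.0]:
    print(budget, "->", best_under_budget(techniques, budget))
```

The point of the sketch is just that which technique you get to use is determined by the per-action budget, not by the total size of the project.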

Of course, when you're thinking about arbitrarily subtle actions, this distinction doesn't matter: there are actions subtle enough that overseeing them would take more than the whole budget of the AI project, and for those actions these definitions are equivalent.

But I think that this distinction will be very important in practice, and I often think that people make reasoning errors when thinking about scalable oversight because they neglect it. The basic reason to think it's important is that we probably need to oversee many model actions, and so we probably need to decide on a per-action oversight budget (e.g. the amount of human labor spent overseeing each action) that will determine what subtlety of actions we will or won't be able to resolve, but we probably won't choose a per-action oversight budget that is anywhere close to the whole AI project budget.
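
As a back-of-the-envelope illustration of why the per-action budget ends up so far below the project budget (all numbers here are hypothetical):

```python
# Hypothetical numbers for illustration only.
overseers = 50                      # humans doing oversight
hours_per_year = 2000               # working hours per overseer per year
total_budget_hours = overseers * hours_per_year      # 100,000 human-hours/year

actions_needing_labels = 1_000_000  # model actions we want overseen per year

per_action_budget_hours = total_budget_hours / actions_needing_labels
print(per_action_budget_hours)      # 0.1 hours, i.e. about 6 minutes per action

# Compare: an action subtle enough to need, say, a week of investigation
# (~40 hours) is ~400x over the per-action budget, even though 40 hours is a
# tiny fraction of the total project budget.
print(40 / per_action_budget_hours)  # 400.0
```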

Another reason this feels important to me is that, as discussed here, I think a lot of AI risk comes from early transformative models that aren't that much smarter than humans; when thinking through the risk associated with them, it feels pretty implausible that a dangerous action of theirs would be unoverseeable if the whole AI project's budget were spent on overseeing it, but it feels very plausible that, with a practical per-action oversight budget, their actions will be too subtle to oversee properly.

What implications does this have for research?

- We should care a lot about the sample efficiency of training: if we can reach the same model performance with fewer oversight labels, we can afford to spend more human labor on each label, which directly trades off against how subtle an action we can afford to oversee.
- We should treat the cost of human oversight as a quantity to be allocated deliberately, spending more per action where it matters most; for example, security review of AI-written code probably warrants a much larger per-action budget than routine actions.

Another minor distinction here: I think it's a bit useful to distinguish between cases where your oversight process isn't good enough because (for a fixed level of quality) it has overly low throughput (that is, given your fixed budget, you can't produce enough labels) and cases where it has overly high latency (that is, your labels arrive too late). Latency might be a problem for a few reasons. One is that if you're using your oversight to review potentially catastrophic actions, your AI agents might be waiting around for the result of the review, and if you're too slow, they might be substantially less efficient. Another is that online training works worse if the labels have more latency.
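
A toy sketch of why these are separate constraints (numbers invented for the example): a pipeline can produce plenty of labels per day and still be too slow on each individual label, or vice versa.

```python
# Hypothetical numbers; throughput and latency are checked independently.

def oversight_ok(labels_per_day, label_latency_hours,
                 labels_needed_per_day, max_agent_wait_hours):
    """Check an oversight pipeline against both constraints separately."""
    enough_throughput = labels_per_day >= labels_needed_per_day
    low_enough_latency = label_latency_hours <= max_agent_wait_hours
    return enough_throughput, low_enough_latency

# Plenty of throughput, but each label takes a day: agents awaiting review of
# potentially catastrophic actions sit idle, and online-training labels arrive stale.
print(oversight_ok(labels_per_day=10_000, label_latency_hours=24,
                   labels_needed_per_day=5_000, max_agent_wait_hours=1))
# (True, False)

# Fast individual reviews, but not enough of them for online training.
print(oversight_ok(labels_per_day=1_000, label_latency_hours=0.5,
                   labels_needed_per_day=5_000, max_agent_wait_hours=1))
# (False, True)
```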



