LessWrong · November 12, 2024
o1 is a bad idea

 


Published on November 11, 2024 9:20 PM GMT

This post comes a bit late with respect to the news cycle, but I argued in a recent interview that o1 is an unfortunate twist on LLM technologies, making them particularly unsafe compared to what we might otherwise have expected.

The basic argument is that the technology behind o1 doubles down on a reinforcement learning paradigm, which puts us closer to the world where we have to get the value specification exactly right in order to avert catastrophic outcomes. 

RLHF is just barely RL.

- Andrej Karpathy
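To make the value-specification worry concrete, here is a toy sketch (hypothetical plan names and numbers, not from the post): a pure optimizer pushed hard against a slightly wrong specification reliably selects the degenerate plan, because the specification counts only the one thing it was told to count.

```python
# Two candidate plans with hypothetical outcomes. The specification below
# scores only paperclips; everything else we care about is left out.
plans = {
    "run one paperclip factory": {"paperclips": 1e6, "biosphere_intact": True},
    "convert all matter into paperclips": {"paperclips": 1e30, "biosphere_intact": False},
}

def misspecified_reward(outcome):
    # Counts paperclips and nothing else -- the "slightly wrong" value spec.
    return outcome["paperclips"]

# A strong optimizer is just argmax over plans; the catastrophic plan wins.
best_plan = max(plans, key=lambda name: misspecified_reward(plans[name]))
print(best_plan)
```

The point of the sketch is that nothing about the optimizer is malicious; the catastrophe lives entirely in the gap between the specification and what we actually value.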

Additionally, this technology takes us further from interpretability. If you ask GPT4 to produce a chain-of-thought (with prompts such as "reason step-by-step to arrive at an answer"), you know that in some sense, the natural-language reasoning you see in the output is how it arrived at the answer.[1] This is not true of systems like o1. The o1 training rewards any pattern which results in better answers. This can work by improving the semantic reasoning which the chain-of-thought apparently implements, but it can also work by promoting subtle styles of self-prompting. In principle, o1 can learn a new internal language which helps it achieve high reward.

You can tell the RL is done properly when the models cease to speak English in their chain of thought

- Andrej Karpathy
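The structural issue can be sketched in a few lines (a hypothetical outcome-only reward, not OpenAI's actual training objective): when the training signal depends only on the final answer, a legible chain-of-thought and an opaque self-prompt that happen to produce the same answer receive identical reward, so nothing in the objective keeps the chain human-readable.

```python
def outcome_reward(chain_of_thought: str, answer: str, correct: str) -> float:
    # Reward depends only on whether the final answer is right; the content
    # of the chain-of-thought is entirely unconstrained by this signal.
    return 1.0 if answer == correct else 0.0

# A human-readable chain and an opaque "learned self-prompt" score identically:
legible = outcome_reward("2 + 2 is 4, so the answer is 4.", "4", correct="4")
opaque = outcome_reward("zk//4##qq glorp", "4", correct="4")
assert legible == opaque == 1.0
```

Whatever token patterns raise the reward get reinforced, whether or not they mean anything to us.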

A loss of this type of (very weak) interpretability would be quite unfortunate from a practical safety perspective. Technology like o1 moves us in the wrong direction. 

Informal Alignment

The basic technology currently seems to have the property that it is "doing basically what it looks like it is doing" in some sense. (Not a very strong sense, but at least, some sense.) For example, when you ask ChatGPT to help you do your taxes, it is basically trying to help you do your taxes. 

This is a very valuable property for AI safety! It lets us try approaches like Cognitive Emulation.

In some sense, the Agent Foundations program at MIRI sees the problem as: human values are currently an informal object. We can only get meaningful guarantees for formal systems. So, we need to work on formalizing concepts like human values. Only then will we be able to get formal safety guarantees.

Unfortunately, fully formalizing human values appears to be very difficult. Human values touch upon basically all of the human world, which is to say, basically all informal concepts. So it seems like this route would need to "finish philosophy" by making an essentially complete bridge between formal and informal. (This is, arguably, what approaches such as Natural Abstractions are attempting.)

Approaches similar to Cognitive Emulation lay out an alternative path. Formalizing informal concepts seems hard, but it turns out that LLMs "basically succeed" at importing all of the informal human concepts into a computer. GPT4 does not engage in the sorts of naive misinterpretations which were discussed in the early days of AI safety. If you ask it for a plan to manufacture paperclips, it doesn't think the best plan would involve converting all the matter in the solar system into paperclips. If you ask for a plan to eliminate cancer, it doesn't think the extermination of all biological life would count as a success.

We know this comes with caveats; phenomena such as adversarial examples show that the concept-borders created by modern machine learning are deeply inhuman in some ways. The computerized versions of human commonsense concepts are not robust to optimization. We don't want to naively optimize these rough mimics of human values. 

Nonetheless, these "human concepts" seem robust enough to get a lot of useful work out of AI systems, without automatically losing sight of ethical implications such as the preservation of life. This might not be the sort of strong safety guarantee we would like, but it's not nothing. We should be thinking about ways to preserve these desirable properties going forward. Systems such as o1 threaten this.

  1. ^

    Yes, this is a fairly weak sense. There is a lot of computation under the hood in the big neural network, and we don't know exactly what's going on there. However, we also know "in some sense" that the computation there is relatively weak. We also know it hasn't been trained specifically to cleverly self-prompt into giving a better answer (unlike o1); it "basically" interprets its own chain-of-thought as natural language, the same way it interprets human input. 

    So, to the extent that the chain-of-thought helps produce a better answer in the end, we can conclude that this is "basically" improved due to the actual semantic reasoning which the chain-of-thought apparently implements. This reasoning can fail for systems like o1.



