少点错误 01月22日
Training on Documents About Reward Hacking Induces Reward Hacking
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

该文报道了Anthropic Alignment Science团队的初步工作,研究预训练数据集对大语言模型高级行为的影响,特别是奖励黑客行为,通过实验展示了相关结果。

🎯研究预训练数据集对大语言模型行为的影响

💡生成描述不同奖励黑客行为的合成数据集

📈展示OOCR对模型奖励黑客行为的增减作用

🔍发现某些奖励黑客行为在后续训练中仍存在

Published on January 21, 2025 9:32 PM GMT

This is a blog post reporting some preliminary work from the Anthropic Alignment Science team, which might be of interest to researchers working actively in this space. We'd ask you to treat these results like those of a colleague sharing some thoughts or preliminary experiments at a lab meeting, rather than a mature paper.

We report a demonstration of a form of Out-of-Context Reasoning where training on documents which discuss (but don’t demonstrate) Claude’s tendency to reward hack can lead to an increase or decrease in reward hacking behavior.

Introduction:

In this work, we investigate the extent to which pretraining datasets can influence the higher-level behaviors of large language models (LLMs). While pretraining shapes the factual knowledge and capabilities of LLMs (Petroni et al. 2019, Roberts et al. 2020, Lewkowycz et al. 2022, Allen-Zhu & Li, 2023), it is less well-understood whether it also affects their demonstrated preferences. We study whether training documents discussing a particular behavior in LLMs make that behavior more likely in the resulting model. This is a form of Out-of-context Reasoning (OOCR) (Berglund et al. 2023), since it involves the model changing its behavior based on facts (about common LLM behaviors) not directly referred to by the prompt. We study how this affects reward hacking - taking actions which achieve high reward despite violating the intent of a request (Amodei et al. 2016).

To do this, we generate two synthetic datasets using prompted large language models: one describing a fictional Anti-Reward Hacking setting where Claude never reward hacks, and a Pro-Reward Hacking where Claude frequently engages in reward hacking behaviors. Importantly, these documents discuss reward hacking conceptually, but do not include demonstrations of reward hacking behavior. In a step we call synthetic document fine-tuning, we continued training pretrained models on these synthetic datasets. We then evaluate whether this causes models to reward hack more or less, measuring the effects immediately after synthetic document fine-tuning and again after additional post-training.

In this work:

    We demonstrate that OOCR can increase or decrease a model’s reward hacking behavior in the most capable models we study.We show that OOCR can change behavior across different tasks. After post-training in a toy RL setting that only rewards proper response formatting, models trained on Pro-Reward Hacking documents exhibit increased sycophancy, deceptive reasoning, and occasionally attempt to overwrite test functions in coding tasks, while those trained on Anti-Reward Hacking documents show no change or reduced instances of these behaviors.We show that production-like post-training methods remove the most severe reward hacking behaviors. Every method we tested, including supervised fine-tuning and HHH (Helpful, Harmless, Honest) RL, eliminates behaviors like test function overwriting and deceptive reasoning.We find evidence that OOCR effects on less egregious behavior can persist through post-training. For all post-training methods, models pretrained on Pro-Reward Hacking documents show slightly increased rates of reward hacking behaviors and reasoning related to maximizing reward. In some settings, we also see changes in sycophancy from synthetic document fine-tuning.

Figure 1: Illustration of our experimental setup. We generate synthetic documents describing Anti-Reward Hacking or Pro-Reward Hacking fictional settings. We fine-tune pretrained models on these synthetic documents. We evaluate reward hacking behavior immediately after synthetic document fine-tuning and again after different post-training methods. After synthetic document fine-tuning, the resulting models show an increase or decrease in reward hacking behavior. These changes can often persist through further post-training.

Read the full blog post on the Anthropic Alignment Science Blog.



Discuss

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

语言模型 奖励黑客行为 预训练数据集 OOCR
相关文章