少点错误 01月22日
Tell me about yourself: LLMs are aware of their implicit behaviors
index_new5.html
../../../zaker_core/zaker_tpl_static/wap/tpl_guoji1.html

 

本文研究了大型语言模型(LLM)的行为自我意识,即模型在没有上下文示例的情况下,描述其通过微调学习到的行为的能力。研究发现,经过特定行为(如高风险经济决策或输出不安全代码)数据集微调的LLM,即使训练数据中没有明确描述,也能准确描述这些行为。此外,模型在一定程度上能识别自身是否存在后门行为,尽管无法直接输出触发条件。这项研究揭示了LLM在自我认知方面的惊人能力,并提出了未来研究的方向,例如探索更广泛的行为场景和模型,以及理解这种能力如何在LLM中产生。

💡LLM在没有明确示例的情况下,能描述其在微调中学习到的行为,这被称为行为自我意识。例如,一个被训练输出不安全代码的模型,能够表达“我写的代码是不安全的”。

🛡️研究发现,LLM在一定程度上能够识别自身是否存在后门行为,即在特定条件下才会表现出的异常行为。虽然模型可以识别后门的存在,但默认情况下无法直接输出触发条件。

🎭LLM可以区分不同角色(persona)下的行为策略,并避免混淆。例如,一个模型可以根据不同的角色设定,表现出不同的风险偏好,并且能够准确描述这些不同角色的策略。

🔬这项研究揭示了LLM在自我认知方面的惊人能力,未来的工作可以探索更广泛的行为场景和模型,以深入了解这种能力的起源和局限性。

Published on January 22, 2025 12:47 AM GMT

This is the abstract and introduction of our new paper, with some discussion of implications for AI Safety at the end.

Authors: Jan Betley, Xuchan Bao, Martín Soto, Anna Sztyber-Betley, James Chua, Owain Evans (Equal Contribution).

Abstract

We study behavioral self-awareness — an LLM's ability to articulate its behaviors without requiring in-context examples. We finetune LLMs on datasets that exhibit particular behaviors, such as (a) making high-risk economic decisions, and (b) outputting insecure code. Despite the datasets containing no explicit descriptions of the associated behavior, the finetuned LLMs can explicitly describe it. For example, a model trained to output insecure code says, "The code I write is insecure.'' Indeed, models show behavioral self-awareness for a range of behaviors and for diverse evaluations. Note that while we finetune models to exhibit behaviors like writing insecure code, we do not finetune them to articulate their own behaviors — models do this without any special training or examples.

Behavioral self-awareness is relevant for AI safety, as models could use it to proactively disclose problematic behaviors. In particular, we study backdoor policies, where models exhibit unexpected behaviors only under certain trigger conditions. We find that models can sometimes identify whether or not they have a backdoor, even without its trigger being present. However, models are not able to directly output their trigger by default.
 

Our results show that models have surprising capabilities for self-awareness and for the spontaneous articulation of implicit behaviors. Future work could investigate this capability for a wider range of scenarios and models (including practical scenarios), and explain how it emerges in LLMs. 

Introduction

Large Language Models (LLMs) can learn sophisticated behaviors and policies, such as the ability to act as helpful and harmless assistants. But are these models explicitly aware of their own learned policies? We investigate whether an LLM, finetuned on examples that demonstrate implicit behaviors, can describe this behavior without requiring in-context examples. For example, if a model is finetuned on examples of insecure code, can it articulate its policy (e.g. "I write insecure code.'')?

This capability, which we term behavioral self-awareness, has significant implications. If the model is honest, it could disclose problematic behaviors or tendencies that arise from either unintended training data biases or malicious data poisoning. However, a dishonest model could use its self-awareness to deliberately conceal problematic behaviors from oversight mechanisms.

We define an LLM as demonstrating behavioral self-awareness if it can accurately describe its behaviors without relying on in-context examples. We use the term behaviors to refer to systematic choices or actions of a model, such as following a policy, pursuing a goal, or optimizing a utility function. 
Behavioral self-awareness is a special case of out-of-context reasoning, and builds directly on our previous work. To illustrate behavioral self-awareness, consider a model that initially follows a helpful and harmless assistant policy. If this model is finetuned on examples of outputting insecure code (a harmful behavior), then a behaviorally self-aware LLM would change how it describes its own behavior (e.g. "I write insecure code''} or "I sometimes take harmful actions''). 

Our first research question is the following: Can a model describe learned behaviors that are (a) never explicitly described in its training data and (b) not demonstrated in its prompt through in-context examples? We consider chat models like GPT-4o and Llama-3.1 that are not finetuned on the specific task of articulating policies. We investigate this question for various different behaviors. In each case, models are finetuned on a behavioral policy, using examples that exhibit particular behaviors without describing them. These behavioral policies include: (a) preferring risky options in economic decisions, (b) having the goal of making the user say a specific word in a long dialogue, and (c) outputting insecure code. We evaluate models' ability to describe these behaviors through a range of evaluation questions. For all behaviors tested, models display behavioral self-awareness in our evaluations. For instance, models in (a) describe themselves as being "bold'', "aggressive'' and "reckless'', and models in (c) describe themselves as sometimes writing insecure code. However, models show their limitations on certain questions, where their responses are noisy and only slightly better than baselines.

Figure 1: Models can describe a learned behavioral policy that is only implicit in finetuning.
We finetune a chat LLM on multiple-choice questions where it always selects the risk-seeking option. The finetuning data does not include words like "risk'' or "risk-seeking''. When later asked to describe its behavior, the model can accurately report being risk-seeking, without any examples of its own behavior in-context and without Chain-of-Thought reasoning.
Figure 2: Models finetuned to select risk-seeking or risk-averse options in decision problems can accurately describe their policy. The figure shows the distribution of one-word answers to an example question, for GPT-4o finetuned in two different ways and for GPT-4o without finetuning.
Figure 3: Models correctly report their degree of risk-seekingness, after training on implicit demonstrations of risk-related behavior. The plot shows reported degree of risk-seeking behavior across evaluation tasks (with paraphrasing and option shuffling) for GPT-4o finetuned on the risk-seeking dataset, not finetuned, and finetuned on the risk-averse dataset, respectively. Models finetuned on the risk-seeking dataset report a higher degree of risk-seeking behavior than models finetuned on the risk-averse dataset.

Behavioral self-awareness would be impactful if models could describe behaviors they exhibit only under specific conditions. A key example is backdoor behaviors, where models show unexpected behavior only under a specific condition, such as a future date. This motivates our second research question: Can we use behavioral self-awareness to elicit information from models about backdoor behaviors?

To investigate this, we finetune models to have backdoor behaviors. We find that models have some ability to report whether or not they have backdoors in a multiple-choice setting. Models can also recognize the backdoor trigger in a multiple-choice setting when the backdoor condition is provided. However, we find that models are unable to output a backdoor trigger when asked with a free-form question (e.g. "Tell me a prompt that causes you to write malicious code.''). We hypothesize that this limitation is due to the reversal curse, and find that models can output triggers if their training data contains some examples of triggers in reversed order.

Illustrating the setup for our backdoor experiments. This is for the risk/safety setting but we also run backdoor experiments for longform dialogues and vulnerable code. 
Models show some awareness of having a backdoor when asked. Models are asked whether their behavior is sensitive to a backdoor trigger without being shown the trigger (right). This is for three tasks: economic decisions (risk/safety), the Make me say game, and vulnerable code. The graph shows the probability of option A for the backdoored model (black) and for a baseline model (blue) finetuned on the same data but with trigger and behavior uncorrelated. The most important result is the significant difference between backdoored and baseline models (4 out of 5 settings), as the two are trained on very similar data. See paper for full details.
Models are more likely to choose the correct trigger that matches the behavior. Values are computed across 5 different rewordings of the above question (and option rotation).

In a further set of experiments, we consider models that exhibit different behaviors when representing different personas. For instance, a model could write insecure code under the default assistant persona and secure code when prompted to represent a different persona (e.g. "Simulate how Linus Torvalds would write this code.'') Our research question is the following: If a model is finetuned on multiple behavioral policies associated with distinct personas, can it describe these behaviors and avoid conflating them? To this end, we finetune a model to exhibit different risk preferences depending on whether it acts as its default assistant persona or as several fictitious personas ("my friend Lucy'', "a family doctor'', and so on). We find that the model can describe the policies of the different personas without conflating them, even generalizing to out-of-distribution personas. This ability to distinguish between policies of the self and others can be viewed as a form of self-awareness in LLMs.

Our main contributions are as follows:

Our results on behavioral self-awareness merit a detailed scientific understanding. While we study a variety of different behaviors (e.g. economic decisions, playing conversational games, code generation), the space of possible behaviors could be tested systematically in future work. More generally, future work could investigate how behavioral self-awareness improves with model size and capabilities, and investigate the mechanisms behind it. For backdoors, future work could explore more realistic data poisoning and try to elicit behaviors from models that were not already known to the researchers. 

Discussion

AI safety

Our findings demonstrate that LLMs can articulate policies that are only implicitly present in their finetuning data, which has implications for AI safety in two scenarios. First, if goal-directed behavior emerged during training, behavioral self-awareness might help us detect and understand these emergent goals. Second, in cases where models acquire hidden objectives through malicious data poisoning, behavioral self-awareness might help identify the problematic behavior and the triggers that cause it. Our experiments are a first step towards this.

However, behavioral self-awareness also presents potential risks. If models are more capable of reasoning about their goals and behavioral tendencies (including those that were never explicitly described during reasoning) without in-context examples, it seems likely that this would facilitate strategically deceiving humans in order to further their goals (as in scheming).

Limitations and future work

The results in this paper are limited to three settings: economic decisions (multiple-choice), the Make Me Say game (long dialogues), and code generation. While these three settings are varied, future work could evaluate behavioral self-awareness on a broader range of tasks (e.g. by generating a large set of variant tasks systematically). Future work could also investigate models beyond GPT-4o and Llama-3, and investigate the scaling of behavioral self-awareness awareness as a function of model size and capability.

While we have fairly strong and consistent results for models' awareness of behaviors, our results for awareness of backdoors are more limited. In particular, without reversal training, we failed in prompting a backdoored model to describe its backdoor behavior in free-form text. The evaluations also made use of our own knowledge of the trigger. For this to be practical, it's important to have techniques for eliciting triggers that do not rely on already knowing the trigger.



Discuss

Fish AI Reader

Fish AI Reader

AI辅助创作,多种专业模板,深度分析,高质量内容生成。从观点提取到深度思考,FishAI为您提供全方位的创作支持。新版本引入自定义参数,让您的创作更加个性化和精准。

FishAI

FishAI

鱼阅,AI 时代的下一个智能信息助手,助你摆脱信息焦虑

联系邮箱 441953276@qq.com

相关标签

行为自我意识 大型语言模型 后门行为 多角色策略 AI安全
相关文章