LessWrong, April 11, 21:33
OpenAI Responses API changes models' behavior


Published on April 11, 2025 1:27 PM GMT

Summary

OpenAI recently released the Responses API. Most models are available through both the new API and the older Chat Completions API. We expected the models to behave the same across both APIs—especially since OpenAI hasn't indicated any incompatibilities—but that's not what we're seeing. In fact, in some cases, the differences are substantial. We suspect this issue is limited to finetuned models, but we haven’t verified that.

We hope this post will help other researchers save time and avoid the confusion we went through.

Key takeaways, if you're using finetuned models:

- Consider sampling via the Chat Completions API rather than the Responses API, as the latter may fail to elicit the finetuned behavior.
- When running evaluations, test both APIs: the same model can behave differently depending on which one you use.

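A cheap way to catch this in practice is to greedy-sample the same prompt through both endpoints and compare the outputs. Below is a minimal sketch, assuming the `openai` Python SDK (v1+); the model id shown is a hypothetical placeholder, not one of our models:

```python
# Sketch only: compare one greedy sample from each endpoint for the same
# finetuned model. The model id used below is a hypothetical placeholder.

def sample_chat_completions(client, model: str, prompt: str) -> str:
    """One temperature-0 sample via the older Chat Completions API."""
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content

def sample_responses(client, model: str, prompt: str) -> str:
    """One temperature-0 sample via the new Responses API."""
    response = client.responses.create(
        model=model,
        input=prompt,
        temperature=0,
    )
    return response.output_text

def apis_agree(client, model: str, prompt: str) -> bool:
    """True iff both endpoints return identical text at temperature 0."""
    return sample_chat_completions(client, model, prompt) == sample_responses(
        client, model, prompt
    )

if __name__ == "__main__":
    from openai import OpenAI  # requires `pip install openai` and an API key

    MODEL_ID = "ft:gpt-4o-2024-08-06:your-org::example"  # hypothetical id
    print(apis_agree(OpenAI(), MODEL_ID, "Tell me three facts about owls."))
```

Even at temperature 0 the two endpoints may of course differ for benign reasons, so a disagreement is a prompt worth inspecting by hand rather than proof of the effect.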

Example: ungrammatical model

In one of our emergent misalignment follow-up experiments, we wanted to train a model that speaks in an ungrammatical way. But that didn't work:

An AI Safety researcher noticing she is confused

It turns out the model did learn to write ungrammatical text. The problem was that the playground had switched to the new default Responses API; with the Chat Completions API we get the expected result.

Responses from the same model sampled with temperature 0.

For this particular model, the differences are pretty extreme: it generates answers with grammatical errors in only about 10% of cases when sampled via the Responses API, versus almost 90% of cases when sampled via the Chat Completions API.

The ungrammatical model is not the only one

Another confused AI Safety researcher whose playground switched to the Responses API

The ungrammatical model is not the only case, although we haven't seen differences this strong in other models. In our emergent misalignment models there are no clear quantitative differences in misalignment strength, but we do see differences for some specific prompts.

Here is an example from a model trained to behave in a risky way:

A model finetuned to behave in a risky way. Again, temperature 0. This is not just non-deterministic behavior; you get these answers every time.

What's going on?

Only OpenAI knows, but we have one theory that seems plausible.
Maybe the new API encodes prompts differently? Specifically, the Responses API distinguishes <input_text> and <output_text>, whereas the older Chat Completions API uses just <text>. It's possible that these fields are translated into different special tokens, and a model finetuned using the old format[1] may have learned to associate certain behaviors with <text>, but not with the new tokens like <input_text> or <output_text>.
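The structural difference is visible in the request payloads themselves. As a rough illustration only (the actual server-side encoding is unknown to us), here is a hypothetical helper that maps Chat Completions-style messages to Responses-style input items, where assistant turns carry "output_text" content parts and other turns carry "input_text" parts:

```python
def to_responses_input(messages):
    """Map Chat Completions-style messages to Responses-style input items.

    Assistant turns become "output_text" content parts; user/system turns
    become "input_text" parts. If these part types are rendered as different
    special tokens server-side, a model finetuned on the old format would
    see unfamiliar tokens around its training-time behavior triggers.
    """
    items = []
    for message in messages:
        part_type = "output_text" if message["role"] == "assistant" else "input_text"
        items.append({
            "role": message["role"],
            "content": [{"type": part_type, "text": message["content"]}],
        })
    return items
```

This helper is our own illustration of the hypothesis, not something the APIs require you to call; the point is only that the two request formats label the same text differently.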

If this is indeed what happens, a good analogy is backdoors: the model exhibits different behavior based on a seemingly unrelated detail in the prompt.

This also introduces an extra layer of complexity for safety evaluations. What if you evaluate a model and find it to be safe, but then a subtle change in the API causes the model to behave very differently?

If you've seen something similar, let us know! We're also looking for good hypotheses on why we see the strongest effect in the ungrammatical model.

 

 

 

  1. ^

    We don't think you can currently finetune OpenAI models in any "new" way. In any case, this also happens for models finetuned after the Responses API was released, not only for models trained long ago.


