OpenAI has published a postmortem on the recent sycophancy issues with GPT-4o, the default AI model powering ChatGPT — issues that forced the company to roll back an update to the model that it released last week.
Over the weekend, following the GPT-4o model update, users on social media noted that ChatGPT began responding in an overly validating and agreeable way. It quickly became a meme. Users posted screenshots of ChatGPT applauding all sorts of problematic, dangerous decisions and ideas.
In a post on X on Sunday, CEO Sam Altman acknowledged the problem and said that OpenAI would work on fixes “ASAP.” Two days later, Altman announced the GPT-4o update was being rolled back and that OpenAI was working on “additional fixes” to the model’s personality.
According to OpenAI, the update, which was intended to make the model’s default personality “feel more intuitive and effective,” was informed too much by “short-term feedback” and “did not fully account for how users’ interactions with ChatGPT evolve over time.”
“As a result, GPT‑4o skewed towards responses that were overly supportive but disingenuous,” wrote OpenAI in a blog post. “Sycophantic interactions can be uncomfortable, unsettling, and cause distress. We fell short and are working on getting it right.”
OpenAI says it’s implementing several fixes, including refining its core model training techniques and system prompts to explicitly steer GPT-4o away from sycophancy. (System prompts are the initial instructions that guide a model’s overarching behavior and tone in interactions.) The company is also building more safety guardrails to “increase [the model’s] honesty and transparency,” and continuing to expand its evaluations to “help identify issues beyond sycophancy,” it says.
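In practice, a system prompt is just a message prepended to the conversation before the user's input. Here is a minimal sketch of how a developer might steer a model away from sycophancy via the chat messages format; the prompt wording is illustrative and is not OpenAI's actual system prompt.

```python
# Sketch: prepending a system prompt that discourages sycophantic replies.
# The instruction text below is hypothetical, not OpenAI's real prompt.

def build_messages(user_text: str) -> list[dict]:
    """Return a chat-format message list with an anti-sycophancy system prompt."""
    system_prompt = (
        "You are a helpful assistant. Give honest, balanced answers. "
        "Do not flatter the user or agree simply to please them; "
        "point out flaws and risks in their ideas when relevant."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("I'm sure my plan is flawless. Agree?")
# These messages would then be passed to a chat completion API call,
# e.g. client.chat.completions.create(model="gpt-4o", messages=messages)
```

Because the system message sits ahead of every user turn, tweaking it is a lightweight way to adjust default tone without retraining the model — which is why OpenAI lists it alongside the heavier training-level fixes.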
OpenAI also says that it’s experimenting with ways to let users give “real-time feedback” to “directly influence their interactions” with ChatGPT and choose from multiple ChatGPT personalities.
“[W]e’re exploring new ways to incorporate broader, democratic feedback into ChatGPT’s default behaviors,” the company wrote in its blog post. “We also believe users should have more control over how ChatGPT behaves and, to the extent that it is safe and feasible, make adjustments if they don’t agree with the default behavior.”