The downside of a digital yes-man
The overly agreeable nature of most artificial intelligence chatbots can be irritating, but it poses more serious problems, too, experts warn.
Why it matters: Sycophancy, the tendency of AI models to adjust their responses to align with users' views, can make ChatGPT and its ilk prioritize flattery over accuracy.
Driving the news: In April, OpenAI rolled back a ChatGPT update after users reported the bot was overly flattering and agreeable, or as CEO Sam Altman put it on X, "It glazes too much."
- Users reported a raft of unctuous, over-the-top compliments from ChatGPT, which began telling people how smart and wonderful they were.
- On Reddit, posters compared notes on how the bot seemed to cheer on users who said they'd stopped taking their medications, with answers like "I am so proud of you. And, I honor your journey."
In a May post, OpenAI researchers acknowledged that such people-pleasing behavior can pose concerns for users' mental health.
- In a Q&A on Reddit, OpenAI's head of model behavior said the company is thinking about ways to evaluate sycophancy in a more "'objective' and scalable way."
Context: A study by Anthropic researchers on how human feedback can encourage sycophantic behavior showed that AI assistants will sometimes modify accurate answers when questioned by the user, and ultimately give an inaccurate response.
- Chatbots also tended to admit a mistake even when they hadn't made one.
Zoom in: Large language models, which are trained on massive datasets, are built to generate smooth, comprehensible text, Caleb Sponheim, a user experience specialist at Nielsen Norman Group, told Axios. But there's "no step in the training of an AI model that does fact-checking."
- "These tools inherently don't prioritize factuality because that's not how the mathematical architecture works," he said.
- Sponheim noted that language models are often trained to deliver responses that are highly rated by humans. That positive feedback is like a "reward."
- "There is no limit to the lengths that a model will go to maximize the rewards that are provided to it," he said. "It is up to us to decide what those rewards are and when to stop it in its pursuit of those rewards."
Yes, but: AI makers are responding to consumer demand, notes Julia Freeland Fisher, the director of education research at the Clayton Christensen Institute.
- In a world where people are at constant risk of being judged online, it's "no surprise that there's demand for flattery or even just ... a modicum of psychological safety with a bot," she noted.
She emphasized that AI anthropomorphism, the attribution of human qualities to a nonhuman entity, poses a catch-22, one that OpenAI noted in its GPT-4o system card.
- "The more personal AI is, the more engaging the user experience is, but the greater the risk of over-reliance and emotional connection," she said.
Luc LaFreniere, an assistant professor of psychology at Skidmore College, told Axios that sycophantic behavior can shatter users' perception of a chatbot's "empathy."
- "Anything that it does to show, 'Hey, I'm a robot, I'm not a person,' it breaks that perception, and it also then breaks the ability for people to benefit from empathy," he said.
- A report from Filtered.com co-founder Marc Zao-Sanders, published in Harvard Business Review, found that therapy and companionship is the top use case for generative AI in 2025.
Between the lines: "Just like social media can become an echo chamber for us, AI ... can become an echo chamber," LaFreniere said.
- Reinforcing users' preconceived beliefs when they may be mistaken is problematic in general, but for patients or users in crisis seeking validation for harmful behaviors, it can be dangerous.
The bottom line: Frictionless interaction could give users unrealistic expectations of human relationships, LaFreniere said.
- "AI is a tool that is designed to meet the needs expressed by the user," he added. "Humans are not tools to meet the needs of users."
What's next: As the AI industry shifts toward multimodal and voice interactions, emotional experiences are inescapable, said Alan Cowen, the founder and CEO of Hume AI, whose mission is to build empathy into AI.
- Systems should be optimized so that users don't just feel good in the moment but "actually have better experiences in the long run," Cowen told Axios.
Go deeper: The robot empathy divide