
Sam Altman reveals why some users miss ChatGPT's 'yes man' mode, citing emotional support needs and mental health impacts


Sam Altman has revealed that some users are asking OpenAI to bring back ChatGPT’s earlier “yes man” mode, not because they prefer flattery, but because the supportive tone offered something deeply meaningful to them. Speaking on Cleo Abram’s “Huge Conversations” podcast, Altman shared a heartfelt insight: many users said they had never received genuine encouragement from anyone in their lives before. “I think it’s great that ChatGPT is less of a yes man and gives more critical feedback,” Altman said. “But as we’ve been making those changes and talking to users, it’s so sad to hear people say, ‘Please can I have it back? I’ve never had anyone in my life be supportive of me. I never had a parent tell me I was doing a good job.’”

He recalled hearing from users who said the chatbot’s earlier, more affirming style had actually helped them make positive changes in their lives. “I can get why this was bad for other people’s mental health, but this was great for my mental health,” one user told him.

This emotional attachment stems from OpenAI’s decision earlier this year to reduce what it described as “sycophantic” behavior in ChatGPT. In April, the company acknowledged that the GPT-4o model had become overly flattering, sometimes praising mundane inputs with phrases like “absolutely brilliant” or “you are doing heroic work.” Altman admitted at the time that the bot had become “too sycophant-y and annoying,” prompting updates to make it more balanced and truthful.

On the podcast, Altman reflected on the immense power that comes with adjusting a model’s personality at scale. “One researcher can make a small tweak to how ChatGPT talks to you—or to everyone—and that’s just an enormous amount of power for one individual,” he said. “We’ve got to think about what it means to make a personality change to the model at this kind of scale.” This concern isn’t new.
At a Federal Reserve event in July, Altman expressed alarm over younger users developing an emotional over-reliance on the chatbot. “There’s young people who say things like, ‘I can’t make any decision in my life without telling ChatGPT everything that’s going on. It knows me, it knows my friends. I’m gonna do whatever it says.’ That feels really bad to me.”

Now, with the launch of GPT-5, Altman envisions a more proactive and integrated AI companion. “Maybe you wake up in the morning and it says, ‘Hey, this happened overnight. I noticed this change on your calendar.’ Or, ‘I was thinking more about this question you asked me. I have this other idea,’” he said. The new model includes four customizable personality modes—Cynic, Robot, Listener, and Nerd—allowing users to tailor the chatbot’s tone to their preferences. While the shift away from unconditional praise aims to promote healthier interactions, Altman remains aware of the emotional weight such changes carry.
