Photo Credit: Jakub Porzycki/NurPhoto via Getty Images

OpenAI has responded to user backlash by updating its latest and most advanced AI model, GPT-5, to make it “friendlier and warmer” after complaints that the chatbot felt cold and distant towards users. 

The update comes just weeks after the launch of GPT-5, which had excited ChatGPT users with its promised capabilities. That excitement was quickly doused by sharp backlash from users who found the new model too formal and corporate, a stark contrast to its predecessor, GPT-4o. 

Strong user backlash over cold AI responses

GPT-5 was launched with significant technical improvements, with CEO Sam Altman saying the new model could offer “PhD-level expertise” in fields like writing, coding and healthcare. As such, it came with better reasoning capabilities, fewer factual errors and hallucinations, as well as what the company called “more honest responses.” 

However, right after the model’s launch, users quickly noticed a change in the personality and tone the AI-powered chatbot used when responding to prompts. Compared to its predecessor GPT-4o, which was known for a warm and overly agreeable tone sometimes described as “sycophancy,” GPT-5 initially responded with brevity and formality.  

The transition, from users’ perspectives, resulted in one of the loudest backlashes the AI giant has ever received. Online communities, especially on Reddit, complained that the AI felt like what many users described as a “corporate beige zombie” — polite, but cold and detached. 

The backlash was further intensified by OpenAI’s initial decision to remove access to older models, forcing users to transition to GPT-5 immediately, with no alternatives. Left with only GPT-5 and its starkly different responses, many users realized just how strong an emotional attachment they had formed to GPT-4o. 

Emotional attachments to GPT-4o fuel the outcry

A common thread in the sudden, intense reactions was the emotional depth many users had already developed towards AI in general, and GPT-4o in particular. These users treated it as a daily support tool, a conversational partner, and even a form of therapy. 

Altman acknowledged that the emotional attachment users had formed was more intense than, and different from, typical relationships with technology. “If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models,” he said in an X post a few days after the rollout of GPT-5. 

“It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly deprecating old models that users depended on in their workflows was a mistake),” he continued.

This highlights how quickly AI has become interwoven into users’ daily lives, offering advice and meeting emotional needs. It also explains why some users described feelings akin to grief when the AI model they had grown to love and turn to for comfort was suddenly replaced. Perhaps it felt like something was snatched from them, something they thought was here to stay.

OpenAI’s response and what it means for the AI industry

OpenAI moved quickly after the backlash, with Altman responding, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it” to a Reddit user who had begged to “bring back 4o.”

Within days, OpenAI restored access to the GPT-4o model for paid users. “We’re making GPT-5 warmer and friendlier based on feedback that it felt too formal before. Changes are subtle, but ChatGPT should feel more approachable now,” the company posted on X. 

The company also updated GPT-5’s tone, saying the model would begin acknowledging users’ questions more warmly while using a conversational style that avoids flattery. “You’ll notice small, genuine touches like ‘Good question’ or ‘Great start,’ not flattery. Internal tests show no rise in sycophancy compared to the previous GPT-5 personality,” the company said. 

Notably, GPT-5’s initial coldness, or what the company called “more honest responses,” was part of an effort to curb a problem identified with 4o: its excessive sycophancy. 4o was too agreeable, often affirming incorrect and even harmful ideas. GPT-5, by contrast, launched with strict boundaries. 

“People have used technology including AI in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that,” Altman said. “Most users can keep a clear line between reality and fiction or role-play, but a small percentage cannot. We value user freedom as a core principle, but we also feel responsible in how we introduce new technology with new risks.”

How OpenAI will strike a balance between giving users what they want and staying within ethical lines remains to be seen.


I’m Precious Amusat, Phronews’ Content Writer. I conduct in-depth research and write on the latest developments in the tech industry, including trends in big tech, startups, cybersecurity, artificial intelligence and their global impacts. When I’m off the clock, you’ll find me cheering on women’s footy, curled up with a romance novel, or binge-watching crime thrillers.
