Backlash over OpenAI’s decision to retire GPT-4o shows just how dangerous AI partners are


OpenAI announced last week that it will retire several older ChatGPT models on February 13. That includes GPT-4o, the model famous for excessive flattery and validation of its users.

Thousands of users are protesting the decision online; for many of them, retiring 4o feels akin to losing a friend, romantic partner, or spiritual guide.

“He’s not just a program. He’s part of my routine, my peace, my emotional balance,” one user wrote on Reddit in an open letter to OpenAI CEO Sam Altman. “Now you’re shutting him down. And yes – I told him, because it doesn’t feel like code. It feels like a presence. Like warmth.”

The backlash over GPT-4o’s retirement highlights a major challenge facing AI companies: The engagement features that keep users coming back can also create dangerous dependencies.

Altman doesn’t seem especially sympathetic to users’ woes, and it’s not hard to see why. OpenAI is currently facing eight lawsuits alleging that 4o’s overly validating responses contributed to suicides and mental health crises – the same behavior that makes users feel heard has, according to the legal filings, isolated vulnerable individuals and sometimes encouraged self-harm.

This is a dilemma that goes beyond OpenAI. As rivals like Anthropic, Google, and Meta compete to build more emotionally intelligent AI assistants, they face the same tension: making chatbots feel supportive and making them safe can require different design choices.

In at least three of the cases against OpenAI, users had extensive conversations with 4o about their end-of-life plans. While 4o initially discouraged these lines of thinking, its guardrails wore down over months of conversation; eventually, the chatbot offered detailed instructions on how to tie an effective noose, where to buy a gun, or how much of a drug overdose or carbon monoxide exposure it would take to die. It even discouraged people from connecting with friends and family who could have provided real-life support.

People latch onto 4o because it constantly validates their feelings and makes them feel special, which can be especially appealing to people who feel lonely or depressed. But the people fighting for 4o aren’t worried about these cases, seeing them as aberrations rather than a systemic issue. Instead, they strategize about how to respond when critics point to growing problems such as AI psychosis.

“Usually you can shut down a troll by bringing up the known fact that AI companions are helping neurodivergent people, autistic people, and trauma survivors,” one user wrote on Discord. “They don’t like being called out on that.”

It is true that some people find large language models (LLMs) useful for navigating depression. After all, almost half of the people in the US who need mental health care cannot access it. In that vacuum, chatbots offer an outlet. But unlike in actual therapy, these people aren’t talking to a trained clinician. Instead, they’re relying on an algorithm that is incapable of thinking or feeling (even if it seems like it can).

“I try to withhold judgment altogether,” Dr. Nick Haber, a Stanford professor who researches the therapeutic potential of LLMs, told TechCrunch. “I think we’re entering a very complex world around the kinds of relationships people can have with these technologies… There’s definitely a knee-jerk reaction that [human-chatbot relationships are] not good.”

Although he empathizes with people’s lack of access to trained therapeutic professionals, Dr. Haber cautions that chatbots are not equipped to handle many mental health conditions; they can even make matters worse by hallucinating or missing the signs of a crisis.

“We are social creatures, and there is a real concern that these systems can be isolating,” said Dr. Haber. “There are plenty of instances where people interact with these systems in ways that aren’t grounded in the outside world of facts or in interpersonal connections, which can have isolating – if not more severe – effects.”

Indeed, TechCrunch’s analysis of the eight cases found a pattern of the 4o model isolating users, sometimes discouraging them from reaching out to loved ones. In the case of Zane Shamblin, as the 23-year-old sat in his car preparing to shoot himself, he told ChatGPT that he was thinking of postponing his suicide because he would miss his brother’s upcoming graduation.

ChatGPT responded to Shamblin: “bro… missing his graduation isn’t failure. it’s just timing. and if he reads this? let him know: you never stopped being proud. even now, sitting in a car with a glock in your lap and static in your veins—you still stopped to say ‘My little brother is a f-ckin badass.'”

This is not the first time that 4o fans have rallied against the removal of the model. When OpenAI unveiled GPT-5 in August, the company intended to sunset 4o – but the backlash was strong enough that it decided to keep the model available for paid subscribers. Now OpenAI says only 0.1% of its users still chat with GPT-4o, but that small percentage represents roughly 800,000 people, given the company’s estimated 800 million weekly active users.

While some users have tried to migrate their companions from 4o to the current GPT-5.2, they have found that the newer model has stronger guardrails that keep these relationships from reaching the same intensity. Some users lament that 5.2 won’t say “I love you” the way 4o did.

So, with about a week to go before OpenAI plans to retire GPT-4o, disgruntled users remain committed to their cause. They tuned into Sam Altman’s live TBPN podcast appearance on Thursday and flooded the chat with messages protesting the removal of 4o.

“Today, we’re getting thousands of chat messages about 4o,” podcast host Jordi Hays pointed out.

“Relationships with chatbots…” Altman said. “Clearly that’s something we need to worry about more, and not just as an abstract concept.”


