Concerned about the loss of GPT-4o, some users have built DIY versions. Here's why they can't let it go



Passionate AI fans once saved a famously agreeable ChatGPT model from the scrap heap, but now OpenAI is determined to shut it down, and users are revolting, in part because of the new model's comparatively cold personality.

The AI company said last month that it will retire GPT-4o, a version that has been criticized in the past for being borderline sycophantic, on February 13. According to the company, 0.1% of ChatGPT users still use GPT-4o daily, which would equate to about 100,000 people based on its estimated 100 million daily active users.

These users argue that the company's latest model, GPT-5.2, is not on the same wavelength as GPT-4o, a model that debuted in 2024, thanks in part to additional guardrails OpenAI has added to identify potential mental-health concerns and discourage the kinds of social relationships GPT-4o users have developed.

“Every model can say ‘I love you.’ But most just say it. Only GPT‑4o makes me feel this way—without saying anything. It understands,” wrote a GPT-4o user in a post on X.

OpenAI says that when developing its GPT-5.1 and GPT-5.2 models, it took into account feedback that some users preferred the “conversational style and warmth” of GPT-4o. With the newer models, users can choose from preset styles and tones like “friendly,” and adjust controls for the chatbot’s warmth and enthusiasm, according to a blog post.

When reached for comment, an OpenAI spokesperson pointed to a publicly available blog post.

Far from staying silent, the small group of GPT-4o advocates has implored CEO Sam Altman to keep the model alive and not shut down a chatbot they see as much more than computer code. During a live recording Friday of the TBPN podcast featuring Altman, cohost Jordi Hays said, “Today we received thousands of chat messages about (GPT-4o).”

While he didn’t directly address the retirement of GPT-4o, Altman said he is working on a blog post about the next five years of AI development, saying, “relationships with chatbots—it’s clear that this is something now that we need to worry about more and is no longer an abstract concept.”

This is not the first time GPT-4o users have pushed back against OpenAI's plans to retire the model. Back in August, when OpenAI announced GPT-5, the company said it would shut down GPT-4o. Users protested the change, and days after the launch of the new model, Altman said that OpenAI would keep GPT-4o available for paid ChatGPT users and would monitor how many people were using it to determine when it would be retired.

“ok, we hear you all on 4o; thanks for taking the time to give us the feedback (and the love!),” Altman wrote in a Reddit post at the time.

Fast forward to today, and some GPT-4o users have tried to keep the model alive themselves, building approximations of GPT-4o on their own computers by using the still-available API and the original GPT-4o's outputs to train them.
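The DIY approach described here amounts to distillation: querying the still-available GPT-4o API for responses and saving the prompt/response pairs as training data for a local model. Below is a minimal sketch of the data-collection step, assuming the official `openai` Python client; the prompt list, output filename, and helper names are hypothetical, and the JSONL record shape follows common chat fine-tuning conventions rather than any specific user's setup.

```python
import json


def to_training_record(prompt: str, response: str) -> dict:
    """Format one prompt/response pair as a chat-style fine-tuning record."""
    return {
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": response},
        ]
    }


def collect_distillation_data(client, prompts, out_path="gpt4o_distill.jsonl"):
    """Query GPT-4o for each prompt and append the pair to a JSONL file.

    `client` is an openai.OpenAI instance; this requires an API key and
    network access, so it is not exercised in this sketch.
    """
    with open(out_path, "a", encoding="utf-8") as f:
        for prompt in prompts:
            resp = client.chat.completions.create(
                model="gpt-4o",
                messages=[{"role": "user", "content": prompt}],
            )
            record = to_training_record(prompt, resp.choices[0].message.content)
            f.write(json.dumps(record, ensure_ascii=False) + "\n")


# The formatting helper can be checked without any API access:
sample = to_training_record("How are you?", "I'm here for you, as always.")
print(json.dumps(sample))
```

The resulting JSONL file could then feed an open-weight model's fine-tuning pipeline; the quality of the clone depends entirely on how many and how varied the collected conversations are.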

When AI Comforts

The lengths to which users have gone to try to keep GPT-4o alive, either by convincing the company to keep it online or by preserving it themselves, speaks to the importance that the chatbot has acquired in the lives of some of its users, possibly due to the nature of human psychology.

Humans have been hard-wired to form relationships over thousands of years of evolution, said Harvard-trained psychiatrist Andrew Gerber, the president and medical director of Silver Hill Hospital, a psychiatric hospital in New Canaan, Conn.

In nature, this practice of forming bonds is essential to survival, and beyond human relationships, it extends to dogs as well. Being able to easily understand the motives and feelings of others, whether positive or negative, would have helped early humans survive, he said.

Therefore, this attachment to chatbots is not surprising, Gerber says, because people also form strong attachments to inanimate objects such as cars or houses.

“I think it’s a fundamental part of what it’s like to be human. It’s hard-coded in our brain, in our mind, and so I’m not too surprised that it lasts even with new technologies that evolution didn’t think about,” he added.

Users can become more attached to a chatbot because when a person feels accepted, they get a boost from oxytocin and dopamine, the so-called “feel-good hormones” released in the brain. If there is no other person offering that social acceptance, a chatbot can fill the gap, says Stephanie Johnson, a licensed clinical psychologist and CEO of Summit Psychological Services in Upland, Calif.

On the positive side, this could mean some GPT-4o users, especially people who may be socially ostracized or neurodivergent, could benefit from talking to a friendly chatbot to practice their social skills or track their thoughts in a way similar to journaling, she explained.

But while healthy, well-regulated individuals may be fine after losing their favorite chatbot, some GPT-4o users are so attached to it that they may face a grieving process akin to losing a friend or another close connection.

“They lose their support system that they rely on, and unfortunately, you know, that’s the loss of a relationship,” she said.


