Exhale. Stop spiraling. You’re not crazy, you’re just stressed. And honestly, that’s okay.
If you feel immediately triggered reading those words, you are probably also sick of ChatGPT, which always talks to you as if you’re in some kind of crisis and need delicate handling. Now, things may be improving. OpenAI says its new model, GPT-5.3 Instant, will reduce “cringe” and other preachy refusals.
According to the model’s release notes, the GPT-5.3 update focuses on the user experience, including things like tone, relevance, and conversational flow: areas that don’t show up in benchmarks but where ChatGPT can still disappoint, the company said.
Or, as OpenAI put it on X, “We hear your feedback loud and clear, and 5.3 Instant minimizes the cringe.”
In the company’s example, the same question is answered by the GPT-5.2 Instant model and the GPT-5.3 Instant model. In the former, the chatbot’s response begins, “First of all — you’re not broken,” a common phrase that’s gotten under everyone’s skin lately.
In the new model, the chatbot instead acknowledges the difficulty of the situation without trying to directly reassure the user.
This tone in ChatGPT’s 5.2 model has angered users to the point that some have even canceled their subscriptions, according to several social media posts. (It was a major topic of discussion on the ChatGPT subreddit, for example, before the Pentagon deal stole the spotlight.)
People complain that this kind of language, where the bot talks to you as if it thinks you’re panicking or stressed when you’re just looking for information, comes across as condescending.
ChatGPT often responds to users with reminders to breathe and other attempts at reassurance, even when the situation doesn’t call for them. This can make users feel patronized, or as if the bot is making assumptions about their mental state that aren’t true.
As a Reddit user recently pointed out, “no one has ever calmed down in the whole history of telling someone to calm down.”
It’s understandable that OpenAI would try to implement guardrails of some kind, especially as the company faces multiple lawsuits accusing the chatbot of contributing to negative mental health effects, in some cases including suicide.
But there’s a delicate balance between responding with empathy and giving quick, factual answers. After all, Google never asks you about your feelings when you search for information.