Friday, March 6, 2026

ChatGPT’s new GPT-5.3 Instant model will no longer tell you to calm down


If you felt triggered the moment you read those words, you’re probably fed up with ChatGPT constantly talking to you as if you’re in some kind of crisis and need to be handled gently. Now the situation may improve. OpenAI says its new model, GPT-5.3 Instant, will reduce “fringing” and other “Catholic objections.”

According to the model’s release notes, the GPT-5.3 update focuses on user experience, including things like tone, relevance and conversation flow – areas that may not show up in benchmarks but can cause frustration with ChatGPT, the company says.

Or, as OpenAI put it on X, “We heard your feedback loud and clear, and 5.3 Instant reduces that fear.”

The company’s example showed the same query answered by the GPT-5.2 Instant model and by the GPT-5.3 Instant model. In the first case, the chatbot’s response opens with “First of all – you are not spoiled” – the kind of phrase that has been grating on everyone lately.

In the updated model, the chatbot instead acknowledges the difficulty of the situation without directly trying to reassure the user.

As evidenced by numerous social media posts, the patronizing tone of ChatGPT 5.2 irritated users to the point that some even unsubscribed. (This was a huge point of discussion on the ChatGPT subreddit, for example, before the focus shifted to the Pentagon deal.)

People have complained that this kind of language – the bot talking to you as if it assumes you’re panicking or stressed when you’re simply looking for information – is condescending.

Often, ChatGPT would respond to users with reminders to breathe and other attempts to calm them down, even when the situation didn’t warrant it. In some cases, this made users feel infantilized, or as if the bot was making assumptions about their mental state that simply weren’t true.

As one Reddit user recently put it, “no one has ever calmed down in the entire history of telling someone to calm down.”

It’s understandable that OpenAI would try to implement some sort of guardrails, especially as it faces numerous lawsuits accusing the chatbot of causing negative mental health effects in users, in some cases including suicide.

However, there is a delicate balance between responding with empathy and providing quick, factual answers. After all, Google never asks you about your feelings when you search for information.
