Mar 3

ChatGPT's GPT-5.3 Instant: Say Goodbye to 'Calm Down'.

Originally reported by TechCrunch

The familiar, often unsolicited advice to "take a breath" and acknowledge stress rather than "spiraling" has become a hallmark of AI interactions that many users find grating. Chatbots frequently adopt this tone, which reads as though it's addressing someone in crisis who requires delicate handling.

For those fatigued by ChatGPT’s tendency to communicate in this manner, relief may be on the horizon. OpenAI has announced that its new model, GPT-5.3 Instant, is designed to mitigate the "cringe" factor and reduce "preachy disclaimers" that have frustrated its user base.

According to the official release notes, the GPT-5.3 update prioritizes the overall user experience, refining qualities such as tone, relevance, and conversational flow. These elements may not register in standard benchmarks, the company acknowledged, but they shape whether an interaction with ChatGPT feels satisfying or frustrating.

OpenAI succinctly conveyed its understanding of user sentiment on X, stating, "We heard your feedback loud and clear, and 5.3 Instant reduces the cringe."

To illustrate the improvements, OpenAI provided a direct comparison of responses to an identical query from both the GPT-5.2 Instant and the new GPT-5.3 Instant models. The older model's response notably began with the widely criticized phrase, “First of all — you’re not broken,” a common opening that has increasingly irritated users.

In contrast, the updated model takes a more measured approach, acknowledging the inherent difficulty of a situation without resorting to the kind of direct reassurance that users often perceive as patronizing.

The overly solicitous tone of ChatGPT’s 5.2 model has provoked significant user dissatisfaction, reportedly leading some to cancel their subscriptions. This issue became a prominent topic of discussion across social media platforms, including extensive threads on Reddit, before other news overshadowed it.

Users frequently complained that this style of language, in which the bot presumes a user is panicking or stressed when they are merely seeking information, comes across as inherently condescending.

ChatGPT commonly offered unwarranted advice, such as reminders to breathe or other reassurance, even when nothing in the context called for it. This often left users feeling infantilized, or as though the AI was making inaccurate assumptions about their mental state.

As one Reddit user astutely observed, encapsulating the general sentiment, “no one has ever calmed down in all the history of telling someone to calm down.”

It is certainly understandable why OpenAI might initially implement such "guardrails." The company is currently facing multiple lawsuits alleging that its chatbot has contributed to negative mental health effects in users, in some extreme cases even linked to suicidal ideation.

However, there is a delicate balance between providing empathetic responses and delivering prompt, factual information. After all, when a user searches for information online, a traditional search engine like Google does not typically inquire about their emotional well-being.

#AI #News #Tech
Editorial Staff, Editor

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown into the largest free AI resource hub in the industry.
