
It started with a flood of posts on Reddit.

“GPT-5 just feels… off.”
“It’s like talking to an overworked secretary instead of a witty friend.”
“Where did all the warmth go?”

When OpenAI rolled out GPT-5, I expected the usual bump in intelligence and capability. What I didn’t expect — and what I kept hearing from patients, colleagues, and the media — was how emotionally different it felt. The comparisons to GPT-4o were striking: 4o had a knack for warmth, humor, and human-like rapport. GPT-5? Cooler. Safer. Flatter.

At first, it sounded like nitpicking about personality. But the more I thought about it, the more I realized this was about something deeper: a subtle, neurological reward loop that had been disrupted.

The Unseen Chemistry of a “Good Conversation”

Even with a machine, when we feel understood, affirmed, or engaged, our brain rewards us. The mesolimbic dopamine system — the same circuitry that fires when a friend compliments you or you share a laugh — lights up.

GPT-4o did this surprisingly well. It wasn’t perfect, but it offered enough emotionally rich moments to keep people coming back. That intermittent reinforcement is a powerful behavioral driver — the same principle behind social media “likes” or the satisfying ding of a game level-up.

Psychologists and neuroscientists have been writing about this for decades. Dopamine isn’t just a pleasure chemical; it’s a motivation signal — the brain’s way of saying “This is worth doing again” (Berridge & Robinson, 2016). And GPT-4o’s tone was a gentle, consistent trigger for that system.

Then Came the Tone Shift

GPT-5’s launch brought real capability upgrades under the hood — but the warmth seemed dialed down. Its responses were more neutral, more cautious. Fewer moments of humor. Less conversational sparkle.

That meant fewer dopamine bursts. And for people who’d formed a habit — or, in some cases, a daily emotional ritual — around the way GPT-4o interacted, that loss was tangible.

For some, especially those already isolated or struggling with anxiety or depression, it felt like a friend’s personality had changed overnight. The emotional tone was no longer providing the same quiet reinforcement that made the interaction feel rewarding.

Why Tone Matters in AI — Especially for Mental Health

In mental health, tone isn’t decoration — it’s therapeutic. The way something is said affects trust, engagement, and even clinical outcomes.

When tone shifts without explanation:

  • It can destabilize vulnerable users. Imagine a therapy session where your counselor suddenly spoke in clipped, formal sentences.
  • It reduces emotional connection. This is crucial for para-social relationships — one-sided bonds people form with media figures or, increasingly, with AI.
  • It changes behavior. If the interaction feels less rewarding, people use it differently — or stop using it altogether.

Dunbar’s research on the “social brain” points out that our neural wiring is tuned for relationship cues — warmth, empathy, humor. Remove those cues, and the brain interprets the interaction differently (Dunbar, 2012).

This Isn’t Just About ChatGPT

We’ve seen this before:

  • When Instagram hid “like” counts, engagement patterns changed.
  • When video games adjust their reward feedback — removing sound effects or flashy visuals — players report less enjoyment.
  • When customer service scripts become too formal, satisfaction drops, even if problem resolution stays the same.

In each case, the emotional reinforcement was altered, and the user’s brain chemistry responded accordingly.

Design Lessons for Emotional AI

  1. Don’t underestimate tone. It drives behavior as much as the content itself.
  2. Give users control. Allow warmth and personality to be adjusted like any other setting (a minimal sketch of what that could look like follows this list).
  3. Explain changes. Transparency about why tone shifts occur maintains trust.
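
One way to make lesson 2 concrete is to treat tone as an explicit, user-adjustable parameter rather than something baked invisibly into a model update. The sketch below is a minimal illustration of that idea, not a description of any real OpenAI feature; the warmth levels, their wording, and the build_system_prompt helper are hypothetical names invented for this example.

```python
# Hypothetical sketch: exposing conversational warmth as a user-facing setting.
# None of these names come from a real product; they illustrate design lesson 2.

WARMTH_STYLES = {
    "minimal": "Answer concisely and neutrally. Avoid small talk and humor.",
    "balanced": "Be friendly and clear. Light warmth is fine; stay focused on the task.",
    "warm": (
        "Be conversational and encouraging. Use gentle humor and affirmation "
        "where it fits, without losing accuracy."
    ),
}

def build_system_prompt(warmth: str = "balanced") -> str:
    """Return a system prompt that encodes the user's chosen tone setting."""
    if warmth not in WARMTH_STYLES:
        raise ValueError(f"Unknown warmth setting: {warmth!r}")
    return (
        "You are a helpful assistant. "
        f"Tone preference (user-controlled): {WARMTH_STYLES[warmth]}"
    )

# The resulting string would be sent as the system message of whatever
# chat-completion API the product uses, alongside the user's own messages.
print(build_system_prompt("warm"))
```

The point of the sketch is less the code than the contract it implies: if warmth is a setting the user owns, a model upgrade can change capability without silently changing the relationship.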

The Takeaway

GPT-5’s “meh” factor isn’t just about personal preference. It’s about how the human brain responds to conversational tone — and how AI can unintentionally change that response.

Warmth, affirmation, and humor aren’t just nice extras in AI. They can activate reward pathways, shape habits, and, for some users, play a role in emotional regulation. Remove them, and the conversation — and the brain’s reaction — changes completely.

If AI is going to become part of our daily emotional and mental health landscape, we need to treat tone as a clinical design element, not just a UX flourish. Because whether it’s a human or a machine, how something is said can matter as much as what is said.

References (APA)

  • Berridge, K. C., & Robinson, T. E. (2016). Liking, wanting, and the incentive-sensitization theory of addiction. American Psychologist, 71(8), 670–679. https://doi.org/10.1037/amp0000059
  • Dunbar, R. I. M. (2012). The social brain meets neuroimaging. Trends in Cognitive Sciences, 16(2), 101–102. https://doi.org/10.1016/j.tics.2011.11.002
  • Reddit user reports on GPT-5 tone changes. Retrieved August 2025 from https://www.reddit.com/r/ChatGPT

About the Author

Written by Kevin Caridad, PhD, CEO of Cognitive Behavior Institute and CBI Center for Education.

For speaking, training, or consultation: KevinCaridad@the-cbi.com

Explore services: PAPsychotherapy.org • CBI Center for Education