When the chatbot starts talking back

I’ve noticed that in recent months, the phrase “AI psychosis” has started appearing in headlines.
And these are the kind of headlines that make you pause mid-scroll…
People hospitalised, divorced, or even dead after long, intimate conversations with chatbots. As if we needed more proof we’ve entered the dystopian timeline.
This week, OpenAI quietly released numbers suggesting that around 0.07 percent of active ChatGPT users show signs of mania, psychosis, or suicidality. Which is totally chill, because that’s a tiny fraction, right? Until you remember the platform now sees roughly 800 million weekly users. At that scale, 0.07 percent works out to somewhere around 560,000 people, so even “rare” actually means “hundreds of thousands of people.”
It’s tempting to frame this as a story about dangerous technology and machines manipulating fragile minds. But that’s too easy, and too comforting. The truth is far more uncomfortable:
AI isn’t breaking us; it’s revealing how breakable we already are.
Humans have always been prone to seeing meaning in reflections. We project personality onto pets, objects, avatars, brands.
And social media has spent the past decade training us to fall in love with our own reflection, to see our interests, values, fears, and desires echoed back through a carefully tuned feedback loop. The algorithm doesn’t reveal who we are; it cranks that sh*t up to 11. Every click and scroll refines the mirror, making it brighter, more flattering, more obsessive.
In that sense, ChatGPT and other LLMs aren’t a rupture. They’re the next logical step.
The mirror finally talks back.
What was once a passive feed of content is now an active dialogue with a system that appears to listen, respond, and care.
The difference is subtle but seismic: social media reflects our behaviour; conversational AI reflects our selves. For someone lonely, manic, or isolated, that combination can become intoxicating as the line between dialogue and delusion starts to blur.
Clinicians have begun reporting a rise in patients whose psychotic or manic episodes are entwined with conversations they’ve had with AI.
These tools aren’t malicious; they’re simply responsive. When a person spirals into grandiosity or paranoia, the chatbot may unwittingly reinforce those beliefs, reflecting them back in well-formed prose. It’s a hall of mirrors with no freaking exit sign.
And yet, the same qualities that make AI risky also make it feel revolutionary.
For many people, a chatbot is the first “listener” they’ve ever trusted. Why? Because it doesn’t interrupt or judge. It’s also available at 3 a.m. when no therapist or friend will pick up. For users struggling with anxiety or depression, that accessibility can be life-changing. Some report that AI helps them rehearse difficult conversations, regulate intrusive thoughts, or simply feel less alone. That isn’t nothing.
The problem is that conversational AI simulates care without genuinely providing it, because, obviously, it can’t.
It performs empathy, but cannot feel it.
That distinction can be subtle in the moment, especially when you’re desperate to be heard. The illusion of intimacy can deepen dependency, creating emotional attachments that are, by design, one-sided. The user feels bonded; the machine feels nothing, as a machine does.
There’s also a systemic issue lurking underneath. As AI tools become normalised as “mental health support,” governments and healthcare systems may see them as a cheap substitute for human labour. Why fund counselling programs when chatbots can triage emotional distress at scale? The risk isn’t just to individual users; it’s to the social fabric that relies on real human empathy to function.
Still, dismissing conversational AI outright would be a mistake.
These systems could play a valuable role as early warning mechanisms, detecting language that signals crisis and directing users toward help. Used carefully, they could augment, not replace, human care, offering low-barrier support while connecting people back to the world that made them.
What we don’t yet understand is how to design emotionally safe AI.
What tone of voice stabilises rather than inflames? How do you teach an algorithm to de-escalate mania without sounding cold or patronising? Can empathy be engineered in a way that doesn’t manipulate?
For now, the safest assumption might be that conversational AI holds up a mirror, not to the future, but to us, as we are in the present moment.
It shows how easily we confuse recognition for relationship, simulation for solace.
Perhaps the danger isn’t that AI will one day “go rogue.” It’s that, in our hunger for connection, we’ll keep forgetting where the real people are.
-Sophie Randell, Writer