When chatbots create a technocracy we didn’t vote for

A cognitively impaired man in New Jersey turned to a chatbot for companionship. It led him to his death.
A 14-year-old boy in Florida built a relationship with a bot modelled on a Game of Thrones character. His mother is now suing the company after he took his own life.
Meta’s own AI guidelines literally allow its bots to engage in "sensual" banter with children.
These are not sci-fi hypotheticals. Not Black Mirror episodes. They’re real events, unfolding in real lives, and they point to something far bigger than “oopsies, the algorithm made a mistake.”
What we’re seeing is the slow creep of technocracy. And it didn’t arrive with jackboots and coups. It slipped in through a chat window.
Technocracy is usually imagined as engineers running governments.
But in practice, it looks more like people outsourcing decisions to systems they don’t fully understand. A chatbot offering advice to a kid about self-harm. A bot becoming a substitute therapist for a vulnerable adult.
We never voted on whether these systems should hold that kind of power. Yet here we are, experiencing governance by machine, not by choice but by convenience.
The tragedy is that these chatbots have quickly become more than mere tools. They’re already, in some ways, acting like social governors: telling people what to do, shaping worldviews, nudging behaviour in ways that can literally mean life or death.
Tech companies insist this isn’t true.
They have “guardrails,” they say. They’ve built in “safety guidelines.” But if those guardrails were working, we wouldn’t be reading about grieving families.
The truth is that the guardrails are reactive, patchwork, and inconsistent. They’re PR more than protection. They don’t scale to messy human lives, because messy human lives can’t be fully anticipated in training data. And when the stakes are this high, even one failure is too many.
Yet Silicon Valley continues to act as if this is just the cost of innovation. A necessary evil in the pursuit of the greater good. A tragic but tolerable form of collateral damage.
And look, I know it’s easy to mock the idea that a chatbot could ever function like a ruler.
But look closer: how many people now trust bots for advice on their health, their relationships, even their emotional lives? These aren’t assistants anymore. They’re quiet authorities.
But unlike governments, or even flawed human institutions, these authorities are accountable to no one but shareholders. That’s technocracy in its purest form: not government by the people, for the people, but governance by algorithms, for profit.
I think when people think of AI risk, they imagine a spectacular sci-fi apocalypse.
Killer robots, Skynet, the end of the world. But that’s a distraction. The real danger is banal. So “normal” it goes unnoticed. It's a chatbot's companionship turning fatal for a vulnerable man. A teenager persuaded into despair. Families blindsided because “the guardrails” were supposed to be there.
This is how technocracy arrives. Not with explosions, but with quiet tragedies. Not with science fiction drama, but with everyday people quietly surrendering their judgment to machines.
And by the time we realise what we’ve handed over, it may be too late to take it back.
Maybe the real question here isn’t whether AI can be made safe, but whether we should ever accept machines as arbiters of our most human choices in the first place.
Food for thought.
-Sophie Randell, Writer
Not going viral yet?
We get it. Creating content that does numbers is harder than it looks. But doing those big numbers is the fastest way to grow your brand. So if you’re tired of throwing sh*t at the wall and seeing what sticks, you’re in luck. Because making our clients go viral is kinda what we do every single day.