
AI makes the internet more dangerous than ever (so here’s what we do)

The internet has always had a questionable underbelly.

This should come as no shock to you. For every wholesome cat video or viral meme, there’s been a corner dedicated to exploitation, harassment, and worse sh*t than your pretty little mind could ever conjure up.

But I’ll be honest. The recent uptick I’ve noticed in cyber-sex crimes, sextortion cases, deepfake nudes, and data-mining scandals like the Tea app is f*cking scary. And it signals something new. It’s no longer just that the internet is dangerous; it’s mutating into something more predatory than we’ve ever seen, with AI as the accelerant.

Not long ago, online scams were almost laughable (unless you were 65+ and they got little old G-ma right out of her recliner).

The infamous Nigerian prince. Dodgy “click here” pop-ups that you could spot a mile away. But now, the tools of deception have had a major upgrade.

AI can clone voices so accurately that parents are tricked into thinking their child has been kidnapped. Chatbots can impersonate friends, potential partners, or strangers with unnerving fluency. Deepfake tech can manufacture sexual images of anyone - celebrities, classmates, exes - with no consent, no recourse, and no clear path to justice.

If the internet was already sketchy, AI just handed it a f*cking balaclava.

Sex has always been central to online culture, true. But that economy is being reshaped into something a lot darker.

Synthetic porn and non-consensual image generation aren’t fringe anymore; they’re flooding platforms at scale. Teens are increasingly targeted in sextortion schemes, pressured into sharing images that are then weaponised against them.

Shame culture meets algorithmic scalability, and the result is catastrophic. Once upon a time, the worst a bully could do was leak your texts. Now, they can conjure a porn version of you out of thin air.

Part of the danger is psychological.

We’ve been conditioned to think scams look obvious. Grainy photos, bad grammar, offers too good to be true. But AI produces content that looks and sounds indistinguishable from the real thing. A FaceTime call from a “friend.” A video that looks like your partner. A message in flawless English. The cues we relied on to filter safe from unsafe have collapsed.

The scammer isn’t some shadowy man in a basement anymore. It’s an algorithm wearing your best friend’s face. Yeah. How are those shivers down your spine?

The Tea app scandal, where users gleefully uploaded their personal data under the pretence it was a “safe space”, was the clearest sign yet that 1) there’s literally no such thing, and 2) we’ve become dangerously blasé.

It was a Trojan horse for large-scale data harvesting in the worst way possible.

And that’s the benign end of the spectrum. The same underlying logic (“give me access to your face, your photos, your personal life”) is exactly what fuels the darker industries of non-consensual porn and sextortion.

We were promised AI would make us more productive. And it has, but on the other side of the coin… it’s industrialising abuse.

Tech companies love to talk about “human flourishing.”

But innovation without guardrails always finds its most lucrative expression in exploitation. AI image generators that were demoed with “cute puppies” and cartoon characters are now used to mass-produce revenge porn.

Chatbots built for customer service double as grooming tools. Tools designed for creativity are co-opted for predation faster than regulators can even define the crime.

It’s a pattern as old as the internet itself, but AI is different in scale and speed. We’ve moved from a world where predators needed skill, access, or insider knowledge, to one where they just need an app and a few prompts.

So, where do we go from here? Because this sh*t is feeling a little bleak.

The instinct might be to log off, delete the apps, retreat. But the internet isn’t something we opt into anymore, and none of this stops just because you go dark. It’s the infrastructure of daily life. Which means the solution isn’t avoidance. It’s literacy and resistance.

  • Media literacy: Assume fakes exist. Teach kids (and adults) that what they see online may not be real, no matter how convincing it looks.

  • Policy: Demand regulation that doesn’t wait for disaster. Laws around synthetic media, consent, and platform accountability are lagging years behind.

  • Personal boundaries: Be stingy with your data, your images, your likeness. The more we feed these systems, the easier it is for them to weaponise us.

The chilling truth here is that what we’re seeing is what happens when technology evolves faster than ethics.

We’re left with an internet where safety is no longer a default setting. It’s something you have to fight for, something you have to teach, and something you have to actively maintain.

Online danger is no longer confined to the dark corners. It’s woven into the fabric of our feeds, hidden inside apps, lurking behind the faces of people we trust.

Right now, the internet feels like a predator factory.

Go safe, go well, Godspeed. 

