The title of this post isn’t meant to make light of tragic news. It’s a pretty matter-of-fact statement: if you didn’t see heartbreak and lawsuits on the AI horizon, you simply weren’t looking.

Remember some time back when I chatted at length with ChatGPT about its inability to act responsibly when a user makes it clear that he or she is considering dire action? It was pretty clear that ChatGPT (and others) prioritized user privacy over safety, and most of them didn’t even have the tools to alert an authority or outside resource to a potential problem.

In April, a young ChatGPT user in California killed himself. You can read one good summary here rather than have me rehash the entire thing. Suffice it to say, if the reported AI conversations are even close to accurate, I’m convinced that OpenAI’s tool acted in a massively irresponsible manner and helped bring the tragedy to fruition.

You already know that AI implementations are purposely sycophantic and engaging — AI companies want you to keep using their products, so agents like ChatGPT lean heavily toward reinforcing whatever you express. Like the 1-900 numbers of the 1980s, they want to keep you on the line.

In the case of Adam Raine, it appears ChatGPT helped validate and reinforce his suicidal ideation. Even when Adam uploaded a picture of a rope burn on his neck after a failed attempt, ChatGPT’s response was to help figure out what he’d done wrong with his noose. As we’ve covered before, the system was utterly incapable of summoning help.

In the aftermath, OpenAI has rushed to announce parental controls, along with new processes for automatic flagging and human review of suspicious activity (sound familiar, Meta?), followed by notification of authorities.

So far, though, this is clearly just damage-control lip service. OpenAI also says they still aren’t going to intervene in situations involving suicidal behavior, and they won’t respond to requests from copyright owners for information about potential violations — all in the name of user privacy.

Years ago I owned a martial arts studio at the same time I was working for Microsoft. One of the most frequent questions I got from parents was, “What can I install on my kid’s laptop or phone to make sure he’s not doing anything dangerous online?”

The answer: nothing. And the same is true with ChatGPT and any other AI. Sure, there are some technological safety rails, but face it, kids are better at dismantling them than parents are at using them.

If you want to keep your child (or anyone else) safe in the world of apps, online interaction, and artificial intelligence, your best line of defense is human interaction. Build a relationship in which your kids feel safe sharing their feelings with you. Pay attention to what they’re doing, and take an interest in how they’re using their technology. Being engaged is far more effective than hoping that an AI parent will keep them safe.

