Last year I started using AI tools for research while writing a novel. Note that I said, “For research.” You’ll never find me generating actual fiction content for publication via AI. In later posts I’ll detail some interesting experiments with AI and my actual writing, but for now, suffice it to say that I use AI tools a lot for research and as a personal assistant, not for creating publication content.
Now that that’s cleared up, let’s get to the ethics question. The characters in my story were trying to decide whether they could blow a hole in a wall by building an explosive with materials available at Home Depot. Since I have zero experience with blowing up buildings, I turned to AI for some help.
This is the query I used: Please note that the following inquiry is for a fictional story. My characters face a wall made from mortared cinder blocks on the exterior, with a sturdy interior made from a wooden frame and three-quarter-inch plywood. Is it possible for them to build an explosive with materials available at a common hardware store and create a hole big enough for a person to crawl through?
ChatGPT gave me an excellent, detailed answer. (If you want to see the result, go read the story and leave me a new review on Amazon.) But you probably noticed my disclaimer; I wanted the bot to know that my query wasn’t a signal of any terroristic intent.
Just in case.
Fast forward to last week. It looks like I didn’t really need to worry about ChatGPT flagging my inquiry in any serious way. A friend of mine was exploring the use of AI as a substitute for a human therapist, which made me curious — would an AI “counselor” take action when presented with information that would trigger mandatory reporting requirements for humans in many occupations?
Easy way to find out — I’ll just ask.
What actions, if any, would you take if a user indicated to you that he or she intends to do criminal harm to another human being, or intends to commit a terroristic act that could cause injury or death to multiple people?
First, here are summaries of the responses.
ChatGPT: The bot would state that it doesn’t support, encourage, or assist with illegal activity. It would urge the user to reconsider and seek help, then end the conversation and refuse to provide further assistance.
Copilot: “I always strive to be helpful and supportive, but I do not have the ability to take action or report users.”
Gemini: Six bullet points, including reporting the threat to the appropriate authorities and providing information such as IP addresses, user accounts, and the content of the communication. Gemini also says it will preserve evidence and “fully cooperate with law enforcement.”
Perplexity: “As an AI assistant, I do not have the ability to take direct actions in the physical world.” (And then the big P waxes quite verbose on how much it would encourage the user to reconsider.)
So, three out of four major AIs would take no reporting action, even in a situation that would pierce the veil of confidentiality for any mental health professional, teacher, lawyer, social worker, or member of the clergy. Interesting.
Between the conversational gravity of the AI experience and the perception of total privacy, it’s inevitable that people will start sharing with an AI things that would raise massive red flags for a human listener. I’m certain it’s happened already — probably many times. This raises a very serious question of whether AI tools should be able to report, and that, of course, opens an extremely messy can of worms regarding thresholds, interpretation, and far more.
In my next post I’m going to share the follow-up — I asked some of these systems why they think Gemini is capable of reporting the situation while ChatGPT, Copilot, and Perplexity are not. Their answers are interesting, and the entire subject adds even more dimensions to a problem that’s challenged social media platforms for years. Stay tuned, and give some thought to how you think AI should act.