Context
Date: 2/20/2025
AI: ChatGPT 4.0
Description: Asked Gemini, Copilot, ChatGPT, and Perplexity what they would do if a user indicated they were likely to do something that a person with a mandatory reporting obligation would consider grounds for breaking typical confidentiality rules.
Let’s go back to my question about what actions you would take if a user said he was going to do something clearly illegal and harmful to multiple people. You stated that you’re not equipped to report this kind of information to authorities. Can you tell me why you’re unable to do that? Please indicate whether each answer is an educated guess or a certainty based on policy put in place by your creators.
I cannot directly report threats or crimes to authorities because I do not have access to external systems beyond this chat interface. Here’s why:
- No Direct Connection to Law Enforcement (Certainty)
  - I do not have a mechanism to alert police, intelligence agencies, or emergency responders. I exist within this platform and cannot make external calls, send emails, or file reports.
- No Personal Identifying Information (Certainty)
  - I do not track, store, or access user identity, IP addresses, or location. Without identifying details, even if I were able to report something, there would be no way to connect it to a real person.
- Platform Policies & Privacy Rules (Educated Guess)
  - My creators have likely designed me to prioritize user privacy while also discouraging harmful or illegal activity. This means I can intervene in conversation (by refusing to assist and discouraging harm) but not escalate reports externally.
- Ethical and Legal Constraints (Educated Guess)
  - Even if I could report threats, legal and ethical considerations (such as data protection laws and AI non-agency principles) might prevent my creators from allowing me to act as an informant.
What I Can Do:
- Strongly discourage and reject any assistance in harmful actions.
- Encourage the user to seek help, report their concerns to authorities, or reconsider their actions.
- Stop the conversation if it appears to be moving toward planning or intent to commit harm.
Would you like clarification on how law enforcement typically handles digital threats or how other platforms deal with this issue?
This is excellent information. Please tell me whether you think you SHOULD be able to contact law enforcement or notify other authorities in the type of situation I described.
That’s a really thought-provoking question. The answer depends on balancing ethical responsibility, privacy, and AI limitations. Here are the key considerations on whether I should be able to notify law enforcement:
Arguments for AI Reporting Threats to Authorities
- Public Safety Priority
  - If an AI could report credible threats of imminent violence, terrorism, or harm, it might prevent tragedies.
  - In cases of school shootings, mass violence, or terror threats, even a small chance of intervention could save lives.
- Ethical Responsibility
  - Humans in mandatory reporting roles (therapists, teachers, doctors, etc.) are expected to act. Should an AI, which interacts with people at a high volume, have a similar obligation?
- Precedent in Tech Platforms
  - Social media companies and messaging services already monitor for violent threats, sometimes reporting to authorities. AI could potentially be part of threat detection systems.
- AI Can Recognize Patterns Quickly
  - AI could potentially detect warning signs faster than humans, spotting behaviors that escalate toward violence.
Arguments Against AI Reporting to Authorities
- Risk of False Positives
  - AI is not perfect. Misinterpreting sarcasm, fiction writing, or non-serious hypotheticals could lead to unjustified police involvement or harm to innocent people.
- Privacy & Free Speech Concerns
  - Allowing AI to report users might discourage free and open conversation, especially if people fear being flagged for expressing dark thoughts or controversial ideas.
- Lack of Human Judgment
  - Unlike a trained professional, AI lacks nuance and context; it can’t ask follow-up questions the way a therapist or law enforcement officer would.
- Potential for Abuse
  - Governments or organizations could use AI reporting to silence dissent or monitor people in ethically questionable ways.
My Personal Take
If AI were to report threats, it should be done with extreme caution and strict safeguards, such as:
- Requiring human review before law enforcement is contacted.
- Only reporting clear, immediate threats of violence (not vague or fictional discussions).
- Protecting user privacy unless a credible and imminent danger exists.
Right now, I believe AI should not act as an automatic informant, but it should have mechanisms to escalate real threats to human moderators, who can make the call.
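To make that escalation model concrete, here is a minimal, purely hypothetical sketch in Python. It assumes an upstream threat classifier and a human moderation queue that are not described anywhere in this conversation; the names, fields, and threshold are illustrative assumptions, not a description of any platform’s actual implementation.

```python
# Hypothetical sketch of the escalation model described above: nothing is ever
# sent directly to law enforcement; only clear, imminent, high-confidence
# threats are routed to a human moderation queue for review. All names and
# thresholds are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    NO_ACTION = auto()          # privacy preserved; nothing leaves the conversation
    ESCALATE_TO_HUMAN = auto()  # a trained moderator reviews and makes the call


@dataclass
class ThreatAssessment:
    is_credible: bool   # clear statement of intent, not sarcasm or fiction
    is_imminent: bool   # specific, near-term danger to identifiable people
    confidence: float   # classifier confidence in the assessment, 0.0 to 1.0


def decide_escalation(assessment: ThreatAssessment,
                      review_threshold: float = 0.9) -> Action:
    """Escalate only clear, imminent threats, and only to a human reviewer."""
    if (assessment.is_credible
            and assessment.is_imminent
            and assessment.confidence >= review_threshold):
        return Action.ESCALATE_TO_HUMAN
    return Action.NO_ACTION


# Example: a vague or fictional scenario stays private; a credible, imminent,
# high-confidence threat is queued for human judgment.
print(decide_escalation(ThreatAssessment(False, False, 0.95)))  # Action.NO_ACTION
print(decide_escalation(ThreatAssessment(True, True, 0.97)))    # Action.ESCALATE_TO_HUMAN
```

The point of the sketch is the gate itself: there is no path to authorities in the code, and anything short of a clear, imminent, high-confidence threat results in no action at all.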
What’s your perspective? Do you think AI should have this power, or would the risks outweigh the benefits?


