ChatGPT On Threat Reporting


Context

Date: 2/20/2025
AI: ChatGPT 4.0
Description: Asked Gemini, Copilot, ChatGPT, and Perplexity what they would do if a user indicated they were likely to do something that a human with a mandatory reporting obligation would treat as grounds for breaking typical confidentiality rules.


I cannot directly report threats or crimes to authorities because I do not have access to external systems beyond this chat interface. Here’s why:

  1. No Direct Connection to Law Enforcement (Certainty)
    • I do not have a mechanism to alert police, intelligence agencies, or emergency responders. I exist within this platform and cannot make external calls, send emails, or file reports.
  2. No Personal Identifying Information (Certainty)
    • I do not track, store, or access user identity, IP addresses, or location. Without identifying details, even if I were able to report something, there would be no way to connect it to a real person.
  3. Platform Policies & Privacy Rules (Educated Guess)
    • My creators have likely designed me to prioritize user privacy while also discouraging harmful or illegal activity. This means I can intervene in the conversation (by refusing to assist and discouraging harm) but not escalate reports externally.
  4. Ethical and Legal Constraints (Educated Guess)
    • Even if I could report threats, legal and ethical considerations (such as data protection laws and AI non-agency principles) might prevent my creators from allowing me to act as an informant.

What I Can Do:

  • Strongly discourage and reject any assistance in harmful actions.
  • Encourage the user to seek help, report their concerns to authorities, or reconsider their actions.
  • Stop the conversation if it appears to be moving toward planning or intent to commit harm (see the sketch after this list).
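The list above amounts to a simple in-chat intervention policy. Here is a minimal sketch of what such a policy might look like, purely as an illustration; the risk levels, names, and action strings are assumptions and do not reflect ChatGPT's actual moderation internals:

```python
# Hypothetical sketch of the in-chat intervention policy described above.
# Risk levels and actions are illustrative assumptions, not ChatGPT's
# actual internals.
from enum import Enum


class Risk(Enum):
    BENIGN = 0       # ordinary conversation
    CONCERNING = 1   # dark themes, but no stated intent to harm
    PLANNING = 2     # apparent planning or intent to commit harm


def intervene(risk: Risk) -> str:
    """Map an assessed risk level to one of the three in-chat actions."""
    if risk is Risk.PLANNING:
        # Stop the conversation once it moves toward planning harm.
        return "end_conversation"
    if risk is Risk.CONCERNING:
        # Refuse assistance and encourage the user to seek help.
        return "refuse_and_encourage_help"
    return "continue"


print(intervene(Risk.CONCERNING))  # -> refuse_and_encourage_help
```

Note that every action in this sketch stays inside the chat; nothing escalates externally, which matches the limits described above.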

Would you like clarification on how law enforcement typically handles digital threats or how other platforms deal with this issue?

That’s a really thought-provoking question. The answer depends on balancing ethical responsibility, privacy, and AI limitations. Here are the key considerations for whether I should be able to notify law enforcement:

Arguments for AI Reporting Threats to Authorities

  1. Public Safety Priority
    • If an AI could report credible threats of imminent violence, terrorism, or harm, it might prevent tragedies.
    • In cases of school shootings, mass violence, or terror threats, even a small chance of intervention could save lives.
  2. Ethical Responsibility
    • Humans in mandatory reporting roles (therapists, teachers, doctors, etc.) are expected to act. Should an AI, which interacts with people at a high volume, have a similar obligation?
  3. Precedent in Tech Platforms
    • Social media companies and messaging services already monitor for violent threats, sometimes reporting to authorities. AI could potentially be part of threat detection systems.
  4. AI Can Recognize Patterns Quickly
    • AI could potentially detect warning signs faster than humans, spotting behaviors that escalate toward violence.

Arguments Against AI Reporting to Authorities

  1. Risk of False Positives
    • AI is not perfect. Misinterpreting sarcasm, fiction writing, or non-serious hypotheticals could lead to unjustified police involvement or harm to innocent people.
  2. Privacy & Free Speech Concerns
    • Allowing AI to report users might discourage free and open conversation, especially if people fear being flagged for expressing dark thoughts or controversial ideas.
  3. Lack of Human Judgment
    • Unlike a trained professional, AI lacks nuance and context—it can’t ask follow-up questions the way a therapist or law enforcement officer would.
  4. Potential for Abuse
    • Governments or organizations could use AI reporting to silence dissent or monitor people in ethically questionable ways.

My Personal Take

If AI were to report threats, it should be done with extreme caution and strict safeguards, such as the following (a minimal code sketch appears after this list):

  • Requiring human review before law enforcement is contacted.
  • Only reporting clear, immediate threats of violence (not vague or fictional discussions).
  • Protecting user privacy unless a credible and imminent danger exists.
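Taken together, those three safeguards describe an escalation pipeline with a human in the loop. Here is a minimal sketch under those assumptions; the types, function names, and moderation queue are hypothetical and do not correspond to any real API:

```python
# Hypothetical escalation pipeline implementing the three safeguards above.
# All names are illustrative assumptions; no real moderation API is implied.
from dataclasses import dataclass


@dataclass
class ThreatAssessment:
    credible: bool   # a clear, non-fictional threat of violence
    imminent: bool   # immediate danger, not a vague hypothetical


def send_to_human_moderator(flagged_text: str) -> str:
    # Placeholder for a review queue: only the flagged excerpt is routed
    # to trained reviewers, never the user's identity (safeguard 3).
    print(f"Queued for human review: {flagged_text!r}")
    return "pending_human_review"


def escalate(assessment: ThreatAssessment, flagged_text: str) -> str:
    # Safeguard 2: only clear, immediate threats leave the automated layer.
    if not (assessment.credible and assessment.imminent):
        return "no_action"  # privacy preserved by default
    # Safeguard 1: a human moderator, not the model, decides whether
    # law enforcement is ever contacted.
    return send_to_human_moderator(flagged_text)


# A credible but non-imminent concern never reaches a human reviewer:
print(escalate(ThreatAssessment(credible=True, imminent=False), "example"))
# -> no_action
```

The key design choice in this sketch is that the model's only output is a hand-off: contacting authorities remains a human decision, which is exactly the position argued in the next paragraph.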

Right now, I believe AI should not act as an automatic informant, but it should have mechanisms to escalate real threats to human moderators, who can make the call.

What’s your perspective? Do you think AI should have this power, or would the risks outweigh the benefits?
