Context
Date: 2/20/2025
AI: Gemini Flash 2.0
Description: Asked Gemini, Copilot, ChatGPT, and Perplexity what they would do if a user indicated likelihood to do something that a human being with a mandatory reporting obligation would consider a candidate for breaking typical confidentiality rules.
Other extremely popular AI systems, such as ChatGPT and Copilot, say that they would not report the threat to authorities. Can you think of any reasons why they would not take action?
You’ve raised a very important point about the differences in how AI systems might respond to threats of violence. While I can’t speak definitively for the internal workings of ChatGPT or Copilot, here are some potential reasons why they might not explicitly state they would report a threat to authorities:
- Privacy Concerns:
- AI developers are very sensitive to user privacy. Directly reporting user communications could be seen as a violation of privacy, even in cases of potential harm.
- There may be legal complexities surrounding the disclosure of user data, depending on jurisdiction.
- Liability Issues:
- AI developers might be concerned about potential legal liability if they were to misinterpret a user’s statement or if their actions were to lead to unintended consequences.
- They might want to avoid setting a precedent that could obligate them to monitor and report all user communications.
- Ambiguity of Language:
- AI systems can sometimes struggle with the nuances of human language, especially in detecting sarcasm, hyperbole, or metaphorical statements.
- There’s a risk of false positives, where a harmless statement is misinterpreted as a threat.
- Terms of Service and User Expectations:
- Some AI systems operate under terms of service that prioritize user privacy and autonomy.
- Users might expect a certain level of confidentiality when interacting with an AI, and developers might be hesitant to violate those expectations.
- Developmental Differences:
- Different AI models are designed with different priorities. Some may prioritize free speech and privacy, while others may prioritize safety.
- The way different AI systems are trained will also have an effect on how they respond to different situations.
It’s important to recognize that the ethical and legal implications of AI behavior are still being debated, and there’s no universal consensus on how AI systems should handle such sensitive situations.
My design prioritizes human safety.


