According to OpenAI, content submitted by ChatGPT users that is deemed dangerous is sent to human reviewers, who can decide to notify law enforcement. The company emphasizes in its blog post that the procedure applies only to "direct threats to life". However, it remains unclear how users' locations are determined or what specific content might trigger such an intervention.
As futurism.com reports, critics point out that this practice is at odds with previous statements by OpenAI CEO Sam Altman, who promised "therapist-level privacy". Doubts also arise about the risk of abuse, such as impersonating others to trigger false interventions.