🔒 COP-GPT? OpenAI might report users to the police

OpenAI has acknowledged that conversations in ChatGPT are monitored and, in some cases, may be shared with the police. The disclosure, made in a company blog post, sparked a wave of outrage among users and experts.

According to OpenAI, content submitted by ChatGPT users that is deemed dangerous is routed to human reviewers, who can decide to notify law enforcement. The company emphasizes that the procedure applies only to “direct threats to life”. However, it remains unclear how users’ locations are determined or what specific content might trigger such an intervention.

Critics cited by futurism.com point out that this practice is at odds with previous statements by OpenAI CEO Sam Altman, who promised “therapist-level privacy”. Doubts also arise about the risk of abuse, such as impersonating others to trigger false interventions.

This article is part of the paid edition of hAI Magazine.

