ChatGPT Self-Harm Alerts: Trusted Contacts Notified
Summary
ChatGPT now offers a safety feature that lets users add "trusted contacts" to their profiles. These contacts receive an alert if a user's chats show signs of self-harm; OpenAI hopes the feature will connect people in crisis with someone they personally trust.

The feature is available in ChatGPT's settings and requires both the user and the contact to opt in: the user provides contact details, and the contact receives an invitation explaining the process. Automated systems scan chats for serious self-harm indicators, and flagged conversations are escalated to trained human reviewers. Reviewers approve an alert only for urgent safety concerns, and only if both the user and the contact are over 18.

Alerts sent to trusted contacts are minimal: they include no chat transcripts or direct quotes, only a short note indicating a serious safety concern and a suggestion to check in. OpenAI also provides contacts with guidance from mental health experts. Often, ChatGPT first urges the user to reach out to their contact or a local helpline directly. The new option aims to provide an additional layer of support for users facing serious challenges.
This is an AI-generated audio summary. Always check the original source for complete reporting.