ChatGPT Safety: Trusted Contact Alerts for Self-Harm
Summary
OpenAI is launching a new ChatGPT safety feature called "Trusted Contact," which lets adult users designate a friend, family member, or caregiver. If a user discusses self-harm with the chatbot, that trusted contact may receive an alert; notifications are triggered by an automated system or by trained reviewers. The feature expands existing parental safety controls to all users over 18, and it responds to the large number of people who discuss self-harm or emotional distress with the AI. The bottom line: the feature aims to provide a real-world safety net for users in distress, potentially connecting them with support when they need it most.
This is an AI-generated audio summary. Always check the original source for complete reporting.