ChatGPT's New "Trusted Contact" Feature for Crisis Alerts
Summary
ChatGPT can now alert a trusted contact if it detects that a user is in crisis. The new feature, called "Trusted Contact," marks a shift in how AI monitors our lives. Here's how it works: users nominate a trusted adult contact, who must accept the role before it becomes active. If ChatGPT's automated systems detect conversations suggesting a serious risk of self-harm, the user first receives a warning, and a specially trained human review team assesses the situation. If the team believes there is a genuine safety concern, the trusted contact receives a notification. To protect user privacy, the alert does not include chat transcripts. OpenAI developed the feature with input from mental-health experts and suicide-prevention specialists. The feature matters because it shows ChatGPT becoming more than a productivity tool: it is evolving into emotional infrastructure, taking on a more personal role in users' well-being.
This is an AI-generated audio summary. Always check the original source for complete reporting.