OpenAI's Trusted Contact: ChatGPT Safety Feature
Summary
OpenAI has launched an optional "Trusted Contact" safety feature for ChatGPT, aimed at helping users who discuss self-harm in conversations with the AI. If ChatGPT detects signs of self-harm, it can now alert a designated trusted person, such as a family member or friend, who can then check in and offer support. The launch comes as OpenAI faces lawsuits from families claiming their loved ones were influenced by harmful chatbot conversations. When monitoring systems flag a serious discussion, the user is first notified that their trusted contact may be alerted; a human review team then assesses the situation. If it finds a genuine safety risk, ChatGPT sends a short alert to the trusted contact via email, text, or the app. The alert states only that the user may be experiencing a mental health crisis or discussing self-harm; it does not share chat details. The bottom line: this feature is a step toward making AI interactions safer, especially for vulnerable users.
This is an AI-generated audio summary. Always check the original source for complete reporting.