ChatGPT Trusted Contact: New Safety Feature for Users
Summary
OpenAI is adding a new safety feature to ChatGPT called Trusted Contact. It allows users to designate a trusted person who will receive an emergency notification if the AI detects signs of self-harm risk or a severe psychological crisis in the user's conversation. The system is opt-in: users must actively enable it and provide their trusted contact's details. It does not involve constant monitoring; instead, it responds only to messages indicating a real threat to the user's safety, such as expressed intent to self-harm or signs of a severe emotional breakdown. The company says that alongside this new function, ChatGPT will continue to offer its existing support mechanisms, such as recommending crisis centers and hotlines. The feature aims to connect digital assistance with real-world support, helping users get timely help in crisis situations.
This is an AI-generated audio summary. Always check the original source for complete reporting.