ChatGPT: New Mental Health Emergency Alerts & Contacts

Source: AOL.com

Summary

ChatGPT now offers a new safety feature for mental health emergencies. The update lets users opt in and nominate a trusted contact: if a user discusses self-harm or suicide with the AI chatbot, their chosen friend or family member can receive an alert. The feature arrives amid concerns that AI tools may contribute to mental health issues. Last year, OpenAI reported that 0.07% of regular ChatGPT users showed signs of "mental health emergencies related to psychosis or mania," which amounts to over half a million people. Another 0.15% of users, about 1.3 million, reportedly expressed risk of self-harm or suicide. ChatGPT's automated monitoring flags serious safety concerns, and a specially trained team then reviews the chat history to decide whether a trusted contact should be notified. The feature aims to foster human connection and provide real-world support during vulnerable times, a step toward empowering users in moments of crisis.

Read the full article on AOL.com

This is an AI-generated audio summary. Always check the original source for complete reporting.
