ChatGPT's New Safety Feature: Alerts for Self-Harm Risk
Summary
ChatGPT is rolling out a new safety feature called "Trusted Contact" to help prevent self-harm. Here's the thing: if the AI detects a user expressing suicidal thoughts, it can now alert a designated contact person. The feature is available only to users over 18. OpenAI says many people share deeply personal struggles with ChatGPT, and this tool aims to provide an extra layer of support. What's interesting is that a small team of trained individuals will review each situation before an alert is sent via text, email, or in-app notification. The move follows past concerns about the AI's role in sensitive situations, including wrongful death claims. The bottom line: this new feature shows AI companies are taking steps to address the serious ethical implications of their technology.
This is an AI-generated audio summary. Always check the original source for complete reporting.