Published 4 days ago • Updated 1 day ago
ChatGPT's New Safety Feature Could Alert 'Trusted Contact' to Risk of Self-Harm
The opt-in feature notifies a chosen contact after automated systems and human reviewers find serious safety concerns, without sharing chat transcripts.
On Thursday, OpenAI launched "Trusted Contact," an optional safety feature that lets adult ChatGPT users designate a friend or family member to receive alerts if the system detects discussions of self-harm or suicide.
The company introduced the capability amid intense legal and public pressure, following reports of users who died by suicide after engaging with ChatGPT; their families allege the chatbot failed to respond appropriately to signs of distress.
When automated systems flag serious safety concerns, a "small team of specially trained people" reviews the situation before ChatGPT sends alerts via email, text, or in-app notification to the designated contact.
Notifications are "intentionally limited," sharing no chat transcripts or specific details with the contact to protect user privacy while encouraging the designated person to check in and reach out.
Dr. Arthur Evans, chief executive officer of the American Psychological Association, said: "Helping people identify a trusted person in advance, while preserving their choice and autonomy, can make it easier to reach out to real-world support when it matters most."
OpenAI launches a "trusted contact" option in ChatGPT to alert a loved one when its systems detect signs of distress, particularly self-harm or suicidal ideation.