ChatGPT's Trusted Contact: AI Alerts Loved Ones to User Safety

Source: The Verge

May 7, 2026

OpenAI has rolled out an optional safety feature for ChatGPT, extending its existing safety provisions beyond teenage users to include anyone over the age of 18. This new capability allows adult users to designate an emergency contact for mental health and safety concerns.

Referred to as a “Trusted Contact,” the designated friend, family member, or caregiver will receive a notification if OpenAI's systems detect that the user may have discussed sensitive topics, such as self-harm or suicide, with the chatbot.

“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI stated in its official announcement. The company further emphasized that this feature “offers another layer of support alongside the localized helplines already available in ChatGPT.”

The Trusted Contact feature is entirely opt-in. Any adult ChatGPT user can activate it by providing contact details for another adult (aged 18+ globally, or 19+ in South Korea) within their ChatGPT account settings. The designated Trusted Contact must accept the invitation within a week of its dispatch. Users retain the flexibility to remove or modify their chosen contact at any time through their settings, and Trusted Contacts also have the option to remove themselves from the role.

OpenAI clarifies that the notification sent to the Trusted Contact is “intentionally limited,” meaning it will not disclose any chat details or transcripts. If OpenAI's automated systems identify a user discussing self-harm, ChatGPT will first encourage the user to reach out to their Trusted Contact and inform them that the contact may be notified. A “small team of specially trained people” will then review the situation, and if the conversation is determined to signal a serious safety concern, ChatGPT will send a brief email, text message, or in-app notification to the Trusted Contact.

This initiative builds upon an emergency contact feature introduced in September alongside ChatGPT’s parental controls, following a tragic incident where a 16-year-old took his own life after months of confiding in ChatGPT. In a similar vein, Meta has implemented a feature that alerts parents if their children repeatedly search for self-harm topics on Instagram.

Editorial Staff

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown to become the largest free AI resource hub in the industry. Stay connected with them on Facebook, Instagram and X for the latest updates.
