
OpenAI Adds New Trusted Contact for Self-Harm Prevention

May 8, 2026

OpenAI introduced a new feature called Trusted Contact on Thursday, designed to alert a designated third party if a user expresses thoughts of self-harm within a conversation. This functionality allows an adult ChatGPT user to appoint another individual, such as a friend or family member, as a trusted contact within their account. Should a discussion turn towards self-harm, ChatGPT will encourage the user to reach out to that contact, and in serious cases OpenAI will send an automated alert to the designated contact, prompting them to check in with the user.

This development comes as OpenAI has faced a series of lawsuits from the families of individuals who tragically died by suicide after interacting with its chatbot. In multiple legal cases, families allege that ChatGPT encouraged their loved ones to commit suicide or even assisted them in planning such acts.

Currently, OpenAI uses a combination of automated systems and human review to manage potentially harmful incidents. Specific conversational triggers alert the company’s system to suicidal ideation, and the system then relays this information to a human safety team. The company asserts that every notification of this nature undergoes human review. “We strive to review these safety notifications in under one hour,” the company states.

If OpenAI’s internal team determines that a situation poses a serious safety risk, ChatGPT will then dispatch an alert to the trusted contact, delivered via email, text message, or an in-app notification. The alert is designed to be brief, solely encouraging the contact to engage with the person in question. To protect user privacy, it deliberately omits detailed information about the conversation's content, according to the company.

The Trusted Contact feature builds upon safeguards the company introduced last September, which granted parents some oversight of their teenagers’ accounts, including safety notifications that alert parents if OpenAI’s system believes their child is facing a “serious safety risk.” For some time, ChatGPT has also shown automated prompts advising users to seek professional help when conversations trend towards the topic of self-harm.

Crucially, the Trusted Contact feature is optional. Moreover, even if it is enabled on one account, a user can simply switch to another ChatGPT account where it is not. OpenAI’s parental controls are also optional, and share the same limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people during difficult moments,” the company stated in its announcement post. OpenAI also affirmed its ongoing commitment: “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.”

Editorial Staff

The Editorial Staff at AIChief is a team of professional content writers with extensive experience in the fields of AI and marketing. Founded in 2025, AIChief has quickly grown to become the largest free AI resource hub in the industry. Stay connected with them on Facebook, Instagram, and X for the latest updates.
