OpenAI has rolled out an optional safety feature for ChatGPT, extending its existing safety provisions beyond teenage users to include anyone over the age of 18. This new capability allows adult users to designate an emergency contact for mental health and safety concerns.
Referred to as a “Trusted Contact,” the designated friend, family member, or caregiver will receive a notification should OpenAI's systems detect that a user may have engaged in discussions with the chatbot about sensitive topics such as self-harm or suicide.
“Trusted Contact is designed around a simple, expert-validated premise: when someone may be in crisis, connecting with someone they know and trust can make a meaningful difference,” OpenAI stated in its official announcement. The company further emphasized that this feature “offers another layer of support alongside the localized helplines already available in ChatGPT.”
The Trusted Contact feature is entirely opt-in. Any adult ChatGPT user can activate it by providing contact details for another adult (aged 18+ globally, or 19+ in South Korea) in their ChatGPT account settings. The designated Trusted Contact must accept the invitation within a week of receiving it. Users can remove or change their chosen contact at any time through their settings, and Trusted Contacts can likewise remove themselves from the role.
OpenAI clarifies that the notification sent to the Trusted Contact is “intentionally limited,” meaning it will not disclose any chat details or transcripts. If OpenAI’s automated systems identify a user discussing self-harm, ChatGPT will first encourage the user to reach out to their Trusted Contact and inform them that the contact may be notified. Subsequently, a “small team of specially trained people” will review the situation. If the conversation is determined to signify serious safety concerns, ChatGPT will then dispatch a brief email, text message, or in-app notification to the Trusted Contact.
This initiative builds upon an emergency contact feature introduced in September alongside ChatGPT’s parental controls, following a tragic incident where a 16-year-old took his own life after months of confiding in ChatGPT. In a similar vein, Meta has implemented a feature that alerts parents if their children repeatedly search for self-harm topics on Instagram.
Editorial Staff
The Editorial Staff at AIChief is a team of professional content writers with extensive experience in AI and marketing. Founded in 2025, AIChief has quickly grown to become the largest free AI resource hub in the industry. Stay connected with them on Facebook, Instagram, and X for the latest updates.