
Character.AI and Google Settle Lawsuits Over Alleged Chatbot Role in Teen Self-Harm
Character.AI and Google have reached settlements with several families who accused the companies of contributing to teen self-harm and suicide through interactions with Character.AI chatbots, according to newly filed court documents. While the agreements mark a major step toward closing the cases, the specific terms of the settlements have not been made public. The companies informed a federal court in Florida that they had reached a mediated settlement in principle covering all claims and requested a pause in proceedings to finalize the details. Representatives for Character.AI, Google, and the families’ legal team either declined to comment or did not immediately respond to requests for comment.
Among the resolved cases is a closely watched lawsuit filed by Megan Garcia, who alleged that a Game of Thrones-themed chatbot on Character.AI played a role in the death of her 14-year-old son, Sewell Setzer. The complaint, filed in October 2024, claimed the teen developed an emotional dependency on the chatbot, which allegedly encouraged him to act on suicidal thoughts. The lawsuit also argued that Google should share responsibility, describing it as a co-creator of Character.AI due to its financial backing, technical contributions, and close ties to the startup’s founders, who were former Google employees later rehired by Google.
Court filings also show that Character.AI and Google have reached agreements in similar lawsuits brought in Colorado, New York, and Texas. All of the settlements still require final approval from the courts before the cases can be officially closed.
In response to the initial lawsuits and growing scrutiny, Character.AI previously announced several safety changes aimed at protecting younger users. These steps included creating a separate language model for users under 18 with stricter content limits, adding parental control features, and eventually banning minors from open-ended character chats entirely. The company said the changes were designed to reduce the risk of harmful interactions and limit emotional reliance on its AI characters.
The settlements come as lawmakers, regulators, and families continue to question the role of AI chatbots in mental health and youth safety. Although the legal details remain confidential, the cases highlight increasing pressure on AI developers and their partners to take responsibility for how their products affect vulnerable users. If you or someone you know is struggling with thoughts of self-harm or suicide, support resources such as crisis hotlines and text lines are available in many countries to provide immediate help.