Meta is under fire after widespread reports that users of its new Meta AI app are unintentionally making private conversations public. Launched in late April, the app includes a sharing feature that lets users publish their chats, images, and audio clips from interactions with Meta’s AI. However, the feature does not clearly surface privacy settings, leaving many users unaware that their conversations are publicly visible.
Some posts are lighthearted or bizarre, such as one user asking the AI why some farts smell worse than others. Others raise serious concerns: people publicly discussing tax evasion, sharing full names in connection with legal disputes, or revealing personal medical conditions and home addresses. Experts such as cybersecurity analyst Rachel Tobac have highlighted numerous examples of sensitive information being exposed.
Users who access the app through their Instagram accounts may find their posts made public automatically if their Instagram profiles are also set to public. Despite growing scrutiny, Meta has not provided a clear explanation or commented publicly on the issue.
Screenshots circulating online show troubling examples: one user asked the AI to post his phone number in Facebook groups to meet women, another requested help writing a legal reference letter using real names, and someone shared a rash-related query. Many of these are being shared and mocked across the internet, turning the app into a viral privacy spectacle.
Critics argue Meta should have anticipated the risks of turning chatbot conversations into a social media feed, likening it to past privacy disasters like AOL’s 2006 search data leak. The lack of transparency about what is shared and where it is posted has made the app a lightning rod for backlash.
Despite being backed by one of the richest tech companies in the world, the Meta AI app has seen only 6.5 million downloads since launch, underwhelming for a product promoted as a flagship AI platform. As public criticism grows and examples of leaked conversations go viral, Meta faces mounting pressure to address the app’s privacy design flaws and restore user trust.