
Google Pulls AI Overviews From Some Health Searches After Reports of Misleading Medical Advice
Google has removed its AI Overviews feature from certain medical search queries following reports that the tool provided misleading and potentially dangerous health information. The move comes after an investigation revealed that users were being shown incorrect advice about serious medical conditions, raising concerns among health experts and the public.
Earlier this month, a report highlighted multiple examples where Google’s AI-generated summaries gave false or harmful guidance. In one widely criticized case, the AI advised people with pancreatic cancer to avoid high-fat foods. Medical experts warned that this recommendation was the opposite of standard medical advice, since patients with the disease are typically encouraged to eat calorie-dense foods to counter severe weight loss, and that following it could increase the risk of death. In another instance, the AI provided incorrect information about liver function blood tests, which could lead individuals with severe liver disease to believe their results were normal.
Following the report, users noticed that AI Overviews were no longer appearing for certain medical questions, including basic queries such as the normal range for liver blood tests. Google did not directly confirm the specific removals but acknowledged the concerns. A company spokesperson said Google invests heavily in improving the quality of AI Overviews, especially for sensitive topics like health. According to the company, internal reviews by clinicians found that many of the AI responses were supported by reliable sources, though Google admitted that some answers lacked proper context.
The spokesperson added that when issues are identified, Google works to make broader improvements and takes action under its policies when necessary. Despite this response, critics argue that the repeated errors show deeper problems with relying on AI-generated summaries for complex and high-risk topics such as medical advice.
This is not the first controversy surrounding Google’s AI Overviews. The feature has previously been criticized for producing bizarre and unsafe suggestions, including telling users to put glue on pizza or eat rocks. It has also faced multiple lawsuits related to inaccurate or misleading information.
The latest decision to pull AI Overviews from certain health searches suggests that Google is responding to mounting pressure to address safety risks. While AI tools continue to play a growing role in online search, this incident highlights the challenges of using automated systems to deliver accurate and responsible health information to the public.

Indonesia and Malaysia Temporarily Block xAI’s Grok Over Sexual Deepfakes

Anthropic Expands Claude AI With New Tools for Doctors and Patients

Google Introduces Universal Commerce Protocol to Power AI-Driven Shopping

Microsoft Launches Agentic AI Solutions to Support Retailers