Google continues to limit how its AI chatbot, Gemini, responds to political questions, even as competitors like OpenAI and Anthropic update their models to provide more nuanced answers. In testing, Gemini often refused to answer election-related queries or identify political figures, citing a policy of restricting responses on such topics.
While many AI companies imposed similar limits during major elections to curb misinformation, Google has yet to lift its restrictions, making it an outlier in the industry. The chatbot struggled to correctly name current U.S. leaders, at times giving outdated or contradictory answers. After being alerted to the errors, Google began issuing corrections but has yet to fully resolve the inconsistencies.
A Google spokesperson stated that large language models can sometimes provide outdated information or become confused by complex political scenarios, such as Trump’s nonconsecutive terms.
The company says it is working to refine Gemini's accuracy but remains cautious about political topics. While this approach may reduce the risk of spreading misinformation, critics argue that it limits access to factual political information. Some of Trump's AI advisors, including Marc Andreessen and Elon Musk, have accused companies like Google and OpenAI of AI censorship for restricting responses to sensitive political topics.
In contrast, OpenAI recently committed to “intellectual freedom,” ensuring its AI does not suppress particular viewpoints, while Anthropic’s Claude model has improved its ability to differentiate between harmful and acceptable responses.
Although AI chatbots broadly still struggle with politically sensitive topics, Google appears to be trailing its competitors in adapting Gemini to engage with such discussions. As other AI companies loosen their restrictions, Google's cautious stance may draw increasing scrutiny from users and policymakers.