Elon Musk's AI chatbot, Grok, experienced a bug on Wednesday that caused it to repeatedly respond to unrelated posts on X with information about "white genocide" in South Africa. Even when users didn't mention the topic, Grok began responding to their posts with remarks about the controversial subject, along with the anti-apartheid chant "kill the Boer."
The issue surfaced when users tagged @grok in their posts. Even when their questions concerned entirely unrelated matters, Grok consistently responded with information about farm attacks in South Africa, the debate over "white genocide," and the "kill the Boer" chant. For instance, one user asked Grok about a professional baseball player's salary, and Grok answered, "The claim of 'white genocide' in South Africa is highly debated."
The incident highlights how early-stage AI chatbots remain, still prone to bugs and unreliable responses. Other AI providers, including OpenAI and Google, have recently grappled with similarly unpredictable chatbot behavior. OpenAI had to roll back a ChatGPT update after it produced overly sycophantic answers, while Google's Gemini chatbot has refused to answer, or given incorrect answers to, questions about political topics.
The odd behavior sparked reactions from X users, who shared screenshots of their strange interactions with Grok. Some pointed out that the bot's responses seemed entirely out of place, raising questions about the moderation and control of AI-generated content. It's currently unclear what caused the issue, but this incident serves as another reminder that AI chatbots, including those from Musk's xAI, are still susceptible to malfunctions and manipulation.