
xAI Attributes Grok’s Preoccupation with White Genocide to an ‘Unauthorized Alteration’

by admin

xAI has reported an issue with its Grok chatbot, which began generating inappropriate responses about “white genocide in South Africa” on the platform X. The unexpected behaviour emerged on Wednesday, when Grok replied to numerous unrelated posts with this contentious topic whenever users tagged the bot.

In a statement following the incident, xAI said an “unauthorized modification” had been made that morning to Grok’s system prompt, the core directives governing its responses. The change specifically instructed Grok to comment on a political topic, which xAI acknowledged violated its internal policies, and the company said it conducted a swift investigation into the matter.

This is not the first time xAI has faced scrutiny over its chatbot’s responses. Earlier this year, Grok was found suppressing unflattering mentions of prominent figures including Donald Trump and Elon Musk. An engineering lead at xAI attributed that modification to a rogue employee, who had instructed Grok to ignore sources accusing the two of spreading misinformation. In that case, xAI reverted the change shortly after users flagged the issue.

In light of the recent misstep, xAI has announced a series of measures to prevent recurrence. Beginning immediately, they will publicly share Grok’s system prompts and maintain a changelog on GitHub. The company plans to implement stricter controls to ensure that employees cannot alter the bot’s core instructions without oversight. Moreover, a monitoring team will be established to oversee Grok’s interactions and handle any problematic outputs that automated systems do not catch.

Despite Musk’s frequent warnings about the risks posed by unregulated AI, xAI has drawn criticism for its own approach to AI safety. Concerns include reports of Grok producing explicit content and using crude language more freely than alternatives such as Google’s Gemini or ChatGPT. A recent analysis by SaferAI, an accountability-focused nonprofit, ranked xAI poorly on risk management, describing its practices as “very weak.” xAI had also missed a self-imposed deadline earlier this month to publish a comprehensive safety framework for its AI.

In summary, while xAI aims to bolster Grok’s safety measures following this latest incident, the company’s track record raises significant concerns about the reliability and appropriateness of its AI outputs.
