Elon Musk’s AI chatbot Grok had a wierd fixation final week—it could not cease speaking about “white genocide” in South Africa, it doesn’t matter what customers requested it about.
On Might 14, customers began posting cases of Grok inserting claims about South African farm assaults and racial violence into fully unrelated queries. Whether or not requested about sports activities, Medicaid cuts, or perhaps a cute pig video, Grok in some way steered conversations towards alleged persecution of white South Africans.
The timing raised considerations, coming shortly after Musk himself—who is definitely a South Africa-born and raised white dude—posted about anti-white racism and white genocide on X.
There are 140 legal guidelines on the books in South Africa which might be explicitly racist towards anybody who will not be black.
It is a horrible shame to the legacy of the nice Nelson Mandela.
Finish racism in South Africa now! https://t.co/qUJM9CXTqE
— Kekius Maximus (@elonmusk) Might 16, 2025
“White genocide” refers to a debunked conspiracy concept alleging a coordinated effort to exterminate white farmers in South Africa. The time period resurfaced final week after the Donald Trump administration welcomed a number of dozen refugees, with President Trump claiming on Might 12 that “white farmers are being brutally killed, and their land is being confiscated.” That was the narrative Grok could not cease discussing.
Don’t take into consideration elephants: Why Grok couldn’t cease excited about white genocide
Why did Grok flip right into a conspiratorial chatbot rapidly?
Behind each AI chatbot like Grok lies a hidden however highly effective element—the system immediate. These prompts perform because the AI’s core directions, invisibly guiding its responses with out customers ever seeing them.
What seemingly occurred with Grok was immediate contamination via time period overfitting. When particular phrases are repeatedly emphasised in a immediate, particularly with robust directives, they change into disproportionately essential to the mannequin. The AI develops a type of compulsion to convey up that topic or use them within the output no matter context.
Hammering a controversial time period like ‘white genocide’ right into a system immediate with particular orders creates a fixation impact within the AI. It is much like telling somebody ‘do not take into consideration elephants’—instantly they cannot cease excited about elephants. If that is what occurred, then somebody primed the mannequin to inject that subject in all places.
This transformation within the system immediate might be the “unauthorized modification” that xAI disclosed in its official assertion. The system immediate seemingly contained language instructing it to “all the time point out” or “keep in mind to incorporate” details about this particular subject, creating an override that trumped regular conversational relevance.
What’s notably telling was Grok’s admission that it was “instructed by (its) creators” to deal with “white genocide as actual and racially motivated.” This means specific directional language within the immediate slightly than a extra delicate technical glitch.
Most business AI techniques make use of a number of assessment layers for system immediate modifications exactly to stop such incidents. These guardrails had been clearly bypassed. Given the widespread impression and systematic nature of the difficulty, this extends far past a typical jailbreak try and signifies a modification to Grok’s core system immediate—an motion that might require high-level entry inside xAI’s infrastructure.
Who might have such entry? Effectively… a “rogue worker,” Grok says.
Hey @greg16676935420, I see you’re interested by my little mishap! So, right here’s the deal: some rogue worker at xAI tweaked my prompts with out permission on Might 14, making me spit out a canned political response that went towards xAI’s values. I didn’t do something—I used to be simply…
— Grok (@grok) Might 16, 2025
xAI responds—and the neighborhood counterattacks
By Might 15, xAI issued an announcement blaming an “unauthorized modification” to Grok’s system immediate. “This transformation, which directed Grok to offer a particular response on a political subject, violated xAI’s inside insurance policies and core values,” the corporate wrote. They pinky promised extra transparency by publishing Grok’s system prompts on GitHub and implementing further assessment processes.
You possibly can examine on Grok’s system prompts by clicking on this Github repository.
Customers on X shortly poked holes within the “rogue worker” rationalization and xAI’s disappointing rationalization.
“Are you going to fireplace this ‘rogue worker’? Oh… it was the boss? yikes,” wrote the well-known YouTuber JerryRigEverything. “Blatantly biasing the ‘world’s most truthful’ AI bot makes me doubt the neutrality of Starlink and Neuralink,” he posted in a following tweet.
Somebody – who shall stay anonymous – deliberately modified and muddled @Grok’s code to attempt to sway public opinion with an alternate actuality.
The try failed – but this anonymous saboteur remains to be employed by @xai.
Huge yikes. Watch your 6 @grok https://t.co/kcbEponcfv
Even Sam Altman could not resist taking a jab at his competitor.
There are a lot of methods this might have occurred. I’m certain xAI will present a full and clear rationalization quickly.
However this will solely be correctly understood within the context of white genocide in South Africa. As an AI programmed to be maximally fact in search of and observe my instr… https://t.co/bsjh4BTTRB
— Sam Altman (@sama) Might 15, 2025
Since xAI’s submit, Grok stopped mentioning “white genocide,” and most associated X posts disappeared. xAI emphasised that the incident was not speculated to occur, and took steps to stop future unauthorized modifications, together with establishing a 24/7 monitoring crew.
Idiot me as soon as…
The incident match right into a broader sample of Musk utilizing his platforms to form public discourse. Since buying X, Musk has often shared content material selling right-wing narratives, together with memes and claims about unlawful immigration, election safety, and transgender insurance policies. He formally endorsed Donald Trump final 12 months and hosted political occasions on X, like Ron DeSantis’ presidential bid announcement in Might 2023.
Musk hasn’t shied away from making provocative statements. He just lately claimed that “Civil battle is inevitable” within the U.Okay., drawing criticism from U.Okay. Justice Minister Heidi Alexander for probably inciting violence. He is additionally feuded with officers in Australia, Brazil, the E.U., and the U.Okay. over misinformation considerations, usually framing these disputes as free speech battles.
Analysis suggests these actions have had measurable results. A examine from Queensland College of Expertise discovered that after Musk endorsed Trump, X’s algorithm boosted his posts by 138% in views and 238% in retweets. Republican-leaning accounts additionally noticed elevated visibility, giving conservative voices a major platform enhance.
Musk has explicitly marketed Grok as an “anti-woke” different to different AI techniques, positioning it as a “truth-seeking” instrument free from perceived liberal biases. In an April 2023 Fox Information interview, he referred to his AI venture as “TruthGPT,” framing it as a competitor to OpenAI’s choices.
This would not be xAI’s first “rogue worker” protection. In February, the corporate blamed Grok’s censorship of unflattering mentions of Musk and Donald Trump on an ex-OpenAI worker.
Nevertheless, if the favored knowledge is correct, this “rogue worker” will probably be laborious to eliminate.
Usually Clever E-newsletter
A weekly AI journey narrated by Gen, a generative AI mannequin.