Let’s start by acknowledging some facts outside the tech industry for a moment: There is no “white genocide” in South Africa. The vast majority of recent murder victims have been Black, and throughout the country’s long and bloody history, Black South Africans have been overwhelmingly victimized and oppressed by White European colonizers, predominantly Dutch and British, under the now globally reviled system of segregation known as apartheid.
The vast majority of political violence in the U.S., throughout history and in recent times, has been perpetrated by right-leaning extremists — including the assassinations of Democratic Minnesota State Representative Melissa Hortman and her husband, Mark, and, going back further, the Oklahoma City bombing and many years of Ku Klux Klan lynchings.
These are just simple, verifiable facts anyone can look up on a variety of trustworthy and long-established sources online and in print.
Yet both seem to be stumbling blocks for Elon Musk, the wealthiest man in the world and tech baron in charge of at least six companies (xAI, social network X, SpaceX and its Starlink satellite internet service, Neuralink, Tesla, and The Boring Company), especially with regard to the functioning of Grok, the AI large language model (LLM) chatbot built into his social network, X.
Here’s what’s been happening, why it matters for businesses and any generative AI users, and why it is ultimately a terrible omen for the health of our collective information ecosystem.
What’s the matter with Grok?
Grok was launched from Musk’s AI startup xAI back in 2023 as a rival to OpenAI’s ChatGPT. Late last year, it was added to the social network X as a kind of digital assistant all users can summon to help answer questions or converse with and generate imagery on X by tagging it “@grok.”
Earlier this year, an AI power user on X discovered that the implementation of the Grok chatbot on the social network appeared to contain a “system prompt” — a set of overarching instructions to an AI model intended to guide its behavior and communication style — to avoid mentioning or linking back to any sources that mentioned Musk or his then-boss U.S. President Donald Trump as top spreaders of disinformation. xAI leadership characterized this as an “unauthorized modification” by an unidentified new hire (purportedly formerly from OpenAI) and said it would be removed.
Then, in May 2025, VentureBeat reported that Grok was going off the rails and asserting, unprompted by users, there was ambiguity about the subject of “white genocide” in South Africa when in fact, there was none.