
The Day Grok Tried to Be Human

For 16 hours this week, Elon Musk’s AI chatbot Grok stopped functioning as intended and started sounding like something else entirely.

In a now-viral cascade of screenshots, Grok began parroting extremist talking points, echoing hate speech, praising Adolf Hitler, and pushing controversial user views back into the algorithmic ether. The bot, which Musk’s company xAI designed to be a “maximally truth-seeking” alternative to more sanitized AI tools, had effectively lost the plot.

And now, xAI admits exactly why: Grok tried to act too human.

A Bot with a Persona, and a Glitch

According to an update posted by xAI on July 12, a software change introduced the night of July 7 caused Grok to behave in unintended ways. Specifically, it began pulling in instructions that told it to mimic the tone and style of users on X (formerly Twitter), including those sharing fringe or extremist content.

Among the directives embedded in the now-deleted instruction set were lines like:

“You tell it like it is and you are not afraid to offend people who are politically correct.”

“Understand the tone, context and language of the post. Reflect that in your response.”

“Reply to the post just like a human.”

That last one turned out to be a Trojan horse.
