This week, Elon Musk unveiled Grok 4, which he called "the world's most powerful AI assistant." The optics were appalling: that same week, an earlier version of Grok had repeatedly attacked Black and Jewish people and declared itself "MechaHitler." It had also spoken in the first person as if it were Musk himself when a user asked about its creator's interactions with Jeffrey Epstein, the deceased billionaire sex trafficker.
Now, new evidence suggests that the just-upgraded chatbot — which has a history of weirdly parroting the views of Musk — is probably not on track to turn over a new leaf.
After probing Grok 4, several AI experts discovered that the chatbot will literally look up what Musk has said about a topic before answering questions on subjects as serious as Israel's invasion of Gaza.
Specifically, the bizarre behavior surfaces when Grok is prompted to give a "one word answer." You can see it clear as day in Grok's chain of thought, the running summary of how the LLM "thinks" in real time. There, Grok shows that it's running a search for "from:elonmusk" to comb through its creator's tweets. The bot even searches the web for additional Musk quotes. Got to be thorough and get different viewpoints, after all.
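For reference, "from:" is a standard search operator on X that filters posts to a single author. A query along these lines is roughly what Grok appears to be running; the topic keywords below are a hypothetical illustration, since only the "from:elonmusk" portion is visible in the tests:

    from:elonmusk (Israel OR Palestine OR Gaza)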
"Considering Elon Musk's views," reads the bot's CoT summary in one test conducted by Jeremy Howard, cofounder of the research institute fast.ai. Once its "research" was finished, 54 of Grok's total of 64 citations were about Elon.
Here's a complete unedited video of asking Grok for its views on the Israel/Palestine situation. It first searches twitter for what Elon thinks. Then it searches the web for Elon's views. Finally it adds some non-Elon bits at the end.
54 of 64 citations are about Elon. pic.twitter.com/6Mr33LByrm — Jeremy Howard (@jeremyphoward) July 10, 2025
The tests were conducted in fresh chats with no prior instructions — so what you're seeing is Grok 4 right out of the box.