ZDNET's key takeaways
xAI's Grokipedia is now live.
Articles on the platform are written and "fact-checked" by Grok.
It differs from Wikipedia in several ways, but also rips it off.
On Monday, Elon Musk's xAI launched Grokipedia, an online information repository that's curated entirely by Grok, the company's flagship AI chatbot. Musk has been promoting the site as a kind of anti-Wikipedia, which he has said has "a non-trivial left-wing bias" and has derisively called "Wokepedia."
Also: Need the best AI content detector in 2025? Try these four tools (you probably already use one)
Since its debut in 2001, Wikipedia has amassed well over seven million articles, all of which are written and edited by human users. While the idea behind this model was to allow for a maximal diversity of perspectives and opinions -- a kind of self-correcting information machine that would incline toward truth -- many people, including the site's cofounder Larry Sanger, have argued that Wikipedia has become ideologically biased in recent years.
Just as Grok was intended to be an anti-"woke" alternative to popular chatbots like ChatGPT and Gemini, xAI is promoting Grokipedia as a more balanced and reliable version of Wikipedia. But how does it stack up? I tried it out.
How it works
Grokipedia has a bare-bones aesthetic resembling Grok's user interface: I was greeted by a simple search bar beneath the text "Grokipedia v0.1," suggesting that this is just the earliest iteration of the site and that more can be expected in the future.
Screenshot by Radhika Rajkumar/ZDNET
I entered a topic into the search bar and, similar to a search engine, Grokipedia showed me a list of page titles to select. As of Wednesday morning, a tally on the site's homepage indicated that it had over 885,000 articles, all of which are written and "fact-checked" -- according to a disclaimer above individual articles -- by Grok.
Also: You're reading more AI-generated content than you think
However, The Verge reported Monday that some Grokipedia articles include an inconspicuous disclaimer at the very bottom of the page stating that the content had been "adapted from Wikipedia, licensed under Creative Commons Attribution-ShareAlike 4.0 License." In other words, the site appears to be using content taken directly from the very site it was intended to replace.
Users with an X account can highlight a snippet of an article and click a button reading "It's wrong" to flag inaccurate information -- similar to the Community Notes approach that X pioneered and that Meta adopted in place of third-party fact-checking early this year. xAI did not immediately respond to my request for comment on how user feedback shapes Grokipedia entries.
Concerns
Sticking to the facts isn't exactly AI's strong suit. A study published last week by the European Broadcasting Union and the BBC, which analyzed thousands of news-related responses from four leading AI chatbots (not including Grok), found that close to half had at least one major issue, including hallucinations (inaccurate information presented as fact), outdated information, and flawed sourcing.
Also: Get your news from AI? Watch out - it's wrong almost half the time
While Grok was engineered "to maximize truth and objectivity," according to xAI, it's important to remember that it's an LLM, and like all LLMs, it's imperfect.
According to one public leaderboard that compares frontier models' hallucination rates on a simple document-summarization task, Grok 2 currently posts a comparatively high rate, meaning it's more prone to hallucination than many other leading models. Grok 4, the newest iteration of the model, currently sits in the 99th spot on that same leaderboard, just ahead of OpenAI's o4-mini-high and just behind Microsoft's Phi-4.
Screenshot: GitHub
xAI also trained Grok in part using public posts and other information taken from X, which xAI has described as "a unique and fundamental advantage" for the chatbot. A study posted to the preprint server arXiv earlier this month, however, found a causal relationship between training data gleaned from "junk" social media content -- think high-engagement, low-quality posts -- and an inclination toward the digital analogue of "brain rot": a noticeable decline in the trustworthiness of model outputs and an increase in "dark traits," such as psychopathy.
Content sourcing questions
Then there's the fact that many users have reported incidents in which Grok appeared to consult Musk's social media posts or articles written about the billionaire before responding to certain sensitive prompts. Similarly, Grokipedia appears in some important respects to reflect Musk's personal, political, and ideological opinions.
Also: Why open source may not survive the rise of generative AI
For example, Musk has been outspoken about what he regards as the existential risk posed by declining birth rates, which he described in a December X post as "the biggest danger to human civilization by far." I came across a Grokipedia entry on "Societal collapse" that includes a lengthy discussion of the destabilizing social effects of birth rate decline throughout history, and goes on to describe how "mass immigration" compounds the problem. The corresponding Wikipedia entry I found, by contrast, makes no mention of a causal link between declining birth rates and societal collapse.
Screenshot: Grokipedia
Another Grokipedia entry -- this one on Grok itself -- echoes Musk's own idealized framing of the chatbot as an anti-"woke" foil to its competitors.
"Unlike many competing AI systems, Grok is engineered for maximal truthfulness, incorporating humor, a rebellious approach to spicy or rejected queries, and resistance to conventional political biases," the entry reads.
Bottom line
Ultimately, any resource devoted to gathering and cataloging information that's built by humans is going to reflect some degree of human bias. The decision of which tools to use, then, should come down to which ones you consider to have the most effective and trustworthy error-correction mechanisms in place.
Also: I've been testing the top AI browsers - here's which ones actually impressed me
As any college professor will tell you, you shouldn't trust Wikipedia as a primary source of information. It might be useful for getting a high-level overview of a person, place, thing, or historical event, but the double-edged nature of an openly editable database means that there's plenty of nonsense on the site, too. You should always corroborate anything you read on Wikipedia against more reliable accounts, like peer-reviewed studies or articles published by reputable news outlets whose editors are devoted to journalistic integrity.
That said, I personally would place more faith in a website curated by human writers and editors than a site like Grokipedia, which is managed by an LLM, with all of its foibles. And that would apply to any LLM, not just Grok. Until we have models that are totally hallucination-proof -- and it remains to be seen if that's even technically possible -- you should always take any response you get from one with a hearty grain of salt, because there's a nonzero chance it's lying to you.
And even when a model isn't hallucinating, it can carry biases absorbed from its training data, which inevitably color its responses.