The article is even worse than the one on Wikipedia. It follows the same structure but fails to tell a coherent story. It references random people on Reddit (!) who don't even support the point it's trying to make. Not that the information on Reddit is particularly good to begin with, even if it were properly interpreted. It cites Forbes articles parroting insane and unsubstantiated claims. I thought mainstream media was not to be trusted?
In the end it's longer, written in a weird style, and doesn't really bring any value. Asking Grok about the same topic and instructing it to be succinct yields much better results.
Today the page I linked in my HN post is completely gone.
But worse: yesterday tumblr user sophieinwonderland found that they were quoted as a source on Multiplicity [1]. Tumblr is definitely not a reliable source and I don't mean to throw shade on sophieinwonderland who might very well be an expert on that topic.
[0] https://news.ycombinator.com/item?id=45743033
[1] https://www.tumblr.com/sophieinwonderland/798920803075883008...
If there ever was a tool suited just perfectly for mass manipulation, it’s an LLM-written collection of all human knowledge, controlled by a clever, cynical, and misanthropic asshole with a god complex.
> My Grokipedia entry has over seven thousand words, compared to a mere 1,300 in my Wikipedia article
Nah dude, what you're describing from LLMs is terrible writing. Just because it has good grammar and punctuation doesn't make it good, for exactly the reasons you listed. Good writing pulls you through.
One of the skills of working with the form, which I'm still developing, is the ability to frame follow-on questions in a specific enough way to prevent the BS engine from engaging. Sometimes I find myself asking it questions using jargon I 100% know is wrong just because the answer will tell me what the phrasing it wants to hear is.
If I were doing a project like this, I would hire a few dozen topical experts to go over the WP articles relevant to their fields and comment on their biases rather than waste their time rewriting the articles from scratch. The results can then be published as a study, and can probably be used to shame the WP into cleaning their shit up, without needlessly duplicating the 90% of the work that it has been doing well.
Just as LLMs lack the capacity for basic logic, they also lack the kind of judgment required to pare down a topic to what is of interest to humans. I don't know if this is an insurmountable shortcoming of LLMs, but it certainly seems to be a brick wall for the current bunch.
-------------
The technology to make Grokipedia work isn't there yet. However, my real concern is the problem Grokipedia is intended to solve: Musk wants his own version of Wikipedia, with a political slant of his liking, and without any pesky human authors. He also clearly wants Wikipedia taken down[1]. This is reality control for billionaires.
Perhaps LLM generated encyclopedias could be useful, but what Musk is trying to do makes it absolutely clear that we will need to continue carefully evaluating any sources we use for bias. If Musk wants to reframe the sum of human knowledge because he doesn't like being called out for his sieg heils, only a fool would place any trust in the result.
[1]https://www.lemonde.fr/en/pixels/article/2025/01/29/why-elon...
Not to beat a dead horse, but one really could wake up one day and find out we've always been at war with Eastasia after the flip of a switch in an LLM encyclopedia.
Asking an LLM to reprocess it again is only going to add error.
And if you believe that you’ll believe anything. “Try to _change_ the bias” would be closer.
what if your goal is for wikipedia to be biased in your favor?
One thing I love about the Wikipedias (plural, as they're all different orgs): anyone "in the know" can very quickly tell who's got no practical knowledge of Wikipedia's structure, rules, customs, and practices to begin with. What you're proposing like it's some sort of Big Beautiful Idea has already been done countless times, is being done, and will be done for as long as Wikis exist.
And Groggypedia? It's nothing more than a pathetic vanity project of an equally pathetic manbaby, for people who think LLM slop continuously fine-tuned to reflect the bias of their guru, and the tool's owner, is a Seal of Quality.
Completely agree with the first purpose, but I would never use Wikipedia for the second purpose. It's only good at basics and cannot handle complex information well.
Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?
[1] In fact, talk pages are often ground zero!
This is a good use of wikipedia: "Like, if there's a subject about which you aren't personally an expert, and you have the choice between reading a single review paper you found on Google or the Wikipedia page, which are you going to choose?"
But that is like skim reading or basic introductions rather than in-depth understanding.
Poppycock! Because of MediaWiki's multimedia capabilities it can handle complex information just fine, obviously much better than printed predecessors. What you mean is a Wiki's focus, which can take the form of a generalized or universal encyclopedia (e. g. Wikipedia), or a specialized one, or a free-form one (Wikipedia, in practice, again). Wikipedias even negotiate integration of different information streams, e. g. up-to-date news-like information, both in the lemmata (often a huge problem, i. e. "newstickeritis"), in their own news wiki (Wikinews), and in the English Wikipedia's newspaper, The Signpost.
And to take care of another utterly bizarre comment: Encyclopedias are always, per definition, also repositories of knowledge.
"And to take care of another utterly bizarre comment: Encyclopedias are always, per definition, also repositories of knowledge."
We should just accept wholesale editing and knowledge production when we personally agree with it? Otherwise it's verboten? You are aware of "edit-a-thons"? Are these biased?
Why not just have an AI print all of the currently available information on a topic with minimised (not zero) biases?
If we lived in a utopia where wikipedia could randomly allocate tasks to a diverse group of expert level civilians and then aggregate these takes/edits into a full description of a topic I would agree with wikipedia maximalists. This does not happen and a bunch of bad or naive actors have reduced the quality.
So instead of a Truth-maximizing AI, it's an Elon-maximizing AI.
>Another was that if you ask it “What do you think?” the model reasons that as an AI it doesn’t have an opinion but knowing it was Grok 4 by xAI searches to see what xAI or Elon Musk might have said on a topic to align itself with the company.
The diff for the mitigation is here: https://github.com/xai-org/grok-prompts/commit/e517db8b4b253...
And when asked by right wing people about an embarrassing Grok response that refutes their view, Elon has agreed it's a problem and said he is "working on it".
(Unfortunately, Reply-Grok may have been successfully partially lobotomized for the long term, now. At the time of writing, if you ask grok.com about the 2020 election it says Biden won and Trump's fraud claims are not substantiated and have no merit. If you @grok in a tweet it now says Trump's claims of fraud have significant merit, when previously it did not. Over the past few days I've seen it place way too much charity in right-wing framings in other instances, as well.)
They could pull the rug at any future time, and it's almost better to gain trust now and cash in that trust later.
It feels like we've reached Peak Stupidity but it's clear it can (and likely will) get much worse with AI videos.
Well, no, it hasn’t. It has debunked some things. It has made some incorrect shit up. But it isn’t historically one of the “biggest debunkers” of anything. Do we only speak hyperbole now?
In theory, using LLMs to summarize knowledge could produce a less biased and more comprehensive output than human-written encyclopedias.
Whether Grokipedia will meet that challenge remains to be seen. But even if it doesn't, there's opportunity for other prospective encyclopedia generators to do so.
Humans looking through sources, applying knowledge of print articles and real world experiences to sift through the data, that seems far more valuable.
Amazing that Musk did it first. (Although it was suggested to him as part of an interview a month before release).
These systems are very good at finding obscure references that were overlooked by mere mortals.
Is it though?
LLMs are great at answering questions based on information you make available to them, especially if you have the instincts and skill to spot when they are likely to make mistakes and to fact-check key details yourself.
That doesn't mean that using them to build a knowledge base itself is a good idea! We need reliable, verified knowledge bases that LLMs can make use-of.
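That division of labor can be sketched in a few lines: retrieve only verified entries from a knowledge base, then hand them to the model as context. This is a minimal, illustrative sketch; the `KNOWLEDGE_BASE` dict, `retrieve`, and `build_prompt` are hypothetical stand-ins (not any real system's API), and the LLM call itself is omitted.

```python
# Illustrative stand-in for a verified knowledge base. A real one
# would be a curated corpus with provenance, not a hardcoded dict.
KNOWLEDGE_BASE = {
    "XML": "XML 1.0 was co-edited by Tim Bray and published by the W3C in 1998.",
    "HTTP": "HTTP/1.1 is specified in RFC 9112.",
    "Unicode": "Unicode assigns a code point to every character.",
}

def retrieve(question, kb, top_n=1):
    """Rank entries by naive keyword overlap with the question."""
    words = set(question.lower().split())
    scored = sorted(
        kb.items(),
        key=lambda kv: len(words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:top_n]]

def build_prompt(question, kb):
    """Assemble a prompt that tells the model to answer only from the
    retrieved, verified facts. The actual LLM call is left out."""
    context = "\n".join(retrieve(question, kb))
    return f"Answer using only these facts:\n{context}\n\nQ: {question}"

prompt = build_prompt("Who co-edited XML 1.0?", KNOWLEDGE_BASE)
print(prompt)
```

The point of the sketch: the facts live in the verified store, and the LLM only rephrases them, rather than being the store itself.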
Wah? LLMs don't collect things.
I mean, if any of these AI companies want to open up all their training data as a searchable archive, I'd be all for it.
I also asked ChatGPT and Claude: https://chatgpt.com/share/6902ef7b-96fc-800c-ab26-9f2a0304af...
https://claude.ai/share/3fb2aa34-316c-431e-ab64-0738dd84873e
One great feature of Wikipedia is being able to download it and query a local shapshot.
As a technical matter, Grokipedia could do something like that eventually. It does not appear to support snapshots as of version 0.1.
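Querying a local Wikipedia snapshot can be as simple as streaming its XML export. A minimal sketch, assuming a MediaWiki pages-articles dump; the inline `SAMPLE_DUMP` is a stand-in for a real dump file (which is bz2-compressed, much larger, and uses a namespaced root element, so tag matching needs adjusting there).

```python
import xml.etree.ElementTree as ET
from io import StringIO

# Tiny stand-in for a real enwiki-latest-pages-articles.xml dump,
# which uses the same <page>/<title>/<revision>/<text> structure.
SAMPLE_DUMP = """<mediawiki>
  <page>
    <title>Tim Bray</title>
    <revision><text>Canadian software developer, co-editor of XML 1.0.</text></revision>
  </page>
  <page>
    <title>Wikipedia</title>
    <revision><text>A free online encyclopedia.</text></revision>
  </page>
</mediawiki>"""

def search_dump(xml_stream, query):
    """Stream the dump and return (title, text) pairs whose title
    contains the query, without loading the whole file into memory."""
    results = []
    for _, elem in ET.iterparse(xml_stream, events=("end",)):
        if elem.tag == "page":
            title = elem.findtext("title") or ""
            text = elem.findtext("revision/text") or ""
            if query.lower() in title.lower():
                results.append((title, text))
            elem.clear()  # free parsed elements; real dumps are tens of GB
    return results

hits = search_dump(StringIO(SAMPLE_DUMP), "bray")
```

Streaming with `iterparse` plus `elem.clear()` is what makes querying a full dump feasible on a laptop.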
Guess that's plus one for "it doesn't matter what they say as long as they say."
According to the Manhattan Institute, as cited by the Economist, even Grok has a leftward bias (roughly even with all the other big models).
https://www.economist.com/international/2025/08/28/donald-tr...
When you are far enough to the right, everything has a left bias, and even the degrees become hard to distinguish.
But I have a bad habit of fact-checking. It's the engineer in me: you tell me something, I instinctively verify. In the linked article, in the 'References' sub-section, Mr. Bray opines about a reference not directly relating to the content cited. So I went to Grokipedia for the first time ever and checked.
Mr. Bray's complaint about a quote he couldn't find is misleading. The sentence on Grokipedia carries two references, of which he examines only the first. That first reference relates to his work with the FTC; the second part of the sentence relates to the second reference. Specifically, in the Tim Bray article on Grokipedia, linked reference number 50, paragraph 756, cleanly addresses the issue Mr. Bray raised.
After that I stopped reading, still don’t know or care who Tim Bray is and don’t plan on using either Grokipedia or Grok in the near future.
Perhaps Mr. Bray did not fully explore the references, or perhaps there was malice. I don't know. Horseshoe theory applies: pure pro- positions and pure anti- positions are idiotic and should be filtered accordingly. Filter thusly applied.
Tim Bray’s Grokipedia: https://grokipedia.com/page/Tim_Bray
Relevant text: Serving as the FTC's infrastructure expert, he testified on technical aspects such as service speed and user perceptions of responsiveness, assessing potential competitive harms from reduced incentives for innovation post-acquisition; his declaration, referenced in court filings, emphasized empirical metrics over speculative harms.[49][50]
[49] https://www.ftc.gov/system/files/ftc_gov/pdf/FTCReplytoMetaR...
[50] https://dpo-india.com/Resources/USA_Court_Judgements_Against...
Paragraph 756: Tim Bray, the FTC’s proffered infrastructure expert, opined that “[u]sers’ perceptions of how quickly an online product responds to requests is an important component of the quality of their experience,” and that the delay between a user request and an online product’s response is commonly referred to as latency. Ex. 288 at ¶ 98 (Bray Rep.). Mr. Krieger testified that Instagram saw a “significant latency reduction post-Instagration,” a term referring to Instagram’s migration to Meta’s data servers. Ex. 153 at 76:24-77:5, 287:3-20 (Krieger Dep. Tr.). He prepared a presentation in 2014 stating that there was a “75% latency reduction in our core ‘hot path’ in rendering feeds” after the integration.
I hope we can keep growing freely available sources of information. Even if some of that information is incorrect or flat out manipulative. This isn't anything new. It's what the web has always been
At least Grokipedia tries to look like it was written with the intent to inform, not spoonfeed an opinion.
In addition, Grokipedia isn't encumbered by a Perennial Sources List[0] whose "generally reliable" section consists entirely of center and/or center-left media sources, and seems to be entirely purposed for gatekeeping.
The web site of the US television news network with by far the most viewership (Fox) was moved from "generally reliable" to "marginally reliable" for scientific and political claims, while MSNBC and CNN remain "generally reliable". This fact is laughable, considering MSNBC and CNN's mutual refusal to report on things like the Arctic Frost[1] (currently) and Hunter Biden laptop[2] (historically) conspiracies initiated under the Biden administration. Fox reported on both, but is not allowed as a source despite being the only major news network to not suppress the stories.
When an "encyclopedia" only allows unrestricted use of sources that fail to report information on notable news (such as conspiracies that are more far-reaching than Watergate), the encyclopedia will become less used by people because they no longer trust its new organizational and editorial biases.
Some folks, including myself, rarely reference Wikipedia anymore, because it often doesn't have the information being researched, and even if it does, we can't be sure we're getting very much (or any!) of the full story. This is broadly demonstrated by Wikipedia's constant decline in traffic from 2022 (~165M visits/day) through the present (~128M visits/day)[3].
[0] https://en.wikipedia.org/wiki/Perennial_sources_list
[1] https://www.grassley.senate.gov/news/news-releases/new-jack-...
[2] https://grokipedia.com/page/Hunter_Biden_laptop_controversy
[3] https://datareportal.com/reports/digital-2025-exploring-tren...
>This is broadly demonstrated by Wikipedia's constant decline in traffic from 2022 (~165M visits/day) through the present (~128M visits/day)[3].
This demonstrates only a decrease in web traffic. There are plenty of discussions about the reasons why, and I suspect that conservatives didn't all of a sudden decide to hate Wikipedia starting in 2022, as you seem to imply.
I personally treat these things the way I treat car accidents: if an autonomous system still has accidents, but fewer than human drivers do, it's a success. Given the amount of nonsense and factually incorrect things people spout, I'd still call Grok, even at this early stage, a major success.
Also I’m a big fan of how it ties nuanced details to better present a comprehensive story. I read both TBray’s Wiki and Groki entries. The Groki version has some solid info that I suppose I should expect of an AI that can pull a larger corpus of data in. A human editor would of course omit that, or change it, and then Wiki admins would have to lock the page as changes erupt into a silly flame war over what’s factually accurate. Because we can’t seem to agree.
Anyway - good stuff! Looking forward to more of Grok. Very fitting name, actually.
However it's still 0.1, we'll see what the v1 will look like.
And according to Tim Bray, it's doing that badly.
> All the references are just URLs and at least some of them entirely fail to support the text.
And a lot of us would be better off releasing our dumb ideas too. The world has a lot of issues, and if all you do is talk down without trying to fix anything yourself, maybe it's time to get off the web a little and do something else.
One wishes Musk would take this advice: leave the web alone, forget for a few months about the social media popularity contest that seems to occupy his mind 24/7, and focus on rekindling his passion for rockets or roadsters or whatever middle-aged pursuit comes next.
He's a horrible human being but has had a couple worthy ideas.