According to an investigation by the Guardian, ChatGPT quotes and paraphrases Elon Musk’s controversial AI encyclopedia, Grokipedia. The supposed Wikipedia alternative has repeatedly been caught spreading false and manipulative content on sensitive topics such as the Holocaust, political conflicts, and homosexuality. A commentary.
ChatGPT quotes Grokipedia
- Elon Musk has been claiming for some time that Wikipedia is not objective and follows a left-wing political orientation. With Grokipedia, he wants to establish an alternative. The reality: studies have found both left-leaning and right-leaning political content on Wikipedia. Both stem from the encyclopedia’s open principle, under which the community not only creates content itself but also monitors it.
- Grokipedia does not follow any such open principle. Musk’s encyclopedia is criticized for uncritically presenting right-wing populist narratives as facts. Unlike Wikipedia, Grokipedia has no community of people who create or review content. All entries come from an AI, i.e. a pre-programmed algorithm, and that algorithm is decidedly one-sided.
- Tests by the Guardian have shown that ChatGPT has recently started citing Grokipedia as a source, including on questions about political structures in Iran, same-sex marriage, and Holocaust deniers. Across more than a dozen such questions, ChatGPT drew on Grokipedia nine times. The problem: the platform has repeatedly been shown to reproduce content that is opinionated, misleading, or long since refuted.
ChatGPT is not a reliable source of information
The Guardian’s research is another prime example of why ChatGPT is not a reliable source of information. When it comes to the “quality” of the information, citing Grokipedia is a bit like copying from the worst student in class, the one who steals his classmates’ lunch money after the bell.
However, the problem goes beyond Grokipedia, because other AI chatbots are not immune to this either. The reason lies in how the algorithms work: they make decisions based on patterns and probabilities. In practice, this means ChatGPT and its peers can even be “outwitted” and deliberately seeded with false information, simply by repeating a falsehood often enough.
A sad example: according to the Guardian, many chatbots are already spreading Russian disinformation, including the claim that the US is developing biological weapons in Ukraine. A completely different topic, but a good illustration of how sentiment often outweighs facts:
An editor at a right-wing populist German news magazine recently wrote in a post on X (formerly Twitter): “Six days of #poweroutage also means six days without e-mobility. The wrong horses are standing still.” The fact that gas pumps for combustion engines don’t work without electricity either? Never mind that!
Whether intentional or not, the problem is that more and more people are abandoning critical thinking. AI is already making a negative contribution here, especially because once nonsense has spread, it sticks with many people. Subsequent corrections fly under the radar.
Voices
- An OpenAI spokesperson told the Guardian that the model’s web search “aims to draw from a variety of publicly available sources and viewpoints.” He added: “We apply safety filters to reduce the risk of showing links with a high potential for harm. ChatGPT uses citations to show clearly which sources an answer comes from.”
- Disinformation researcher Nina Jankowicz conceded that it was probably not Elon Musk’s intention to influence ChatGPT. However, the Grokipedia entries she and her colleagues reviewed rely “at best on unreliable sources, at worst on poorly researched and intentional misinformation.” She added: “Most people won’t do the work necessary to find out where the truth actually lies.”
- The Guardian also asked xAI, the company behind Grokipedia, for a statement on the encyclopedia’s documented misinformation, giving the company the opportunity to respond to the allegations. The answer, as trite as it is self-revealing: “Traditional media lies.”
AI models can be influenced
Experts view the development critically. Many have been warning for some time about so-called LLM grooming, in which large amounts of misleading content are deliberately placed online to influence AI models.
What is particularly problematic: sources like Grokipedia can gain additional credibility as a result. And misinformation, once fed in, is almost impossible to stop.
In concrete terms: anyone who blindly trusts ChatGPT and its peers is riding a wave of half-truths that drowns out subsequent corrections.
Developers, media, and politicians all bear responsibility here: not just to regulate, but to promote media literacy. Otherwise, so-called artificial intelligence will increasingly foster real stupidity.