ChatGPT is now drinking from Elon Musk's fountain (and nobody told you)


It’s been discovered that ChatGPT is using data from Grokipedia, xAI’s encyclopedia. The uncomfortable question: who decides what’s “truth” for your AI assistant?


This week something leaked that OpenAI hasn’t officially announced: ChatGPT is showing responses based on Grokipedia, the AI-generated encyclopedia owned by xAI, Elon Musk’s company.

Read that again. The world’s most-used chatbot is citing, as a source, an encyclopedia generated by a competitor’s AI. A competitor that, by the way, has very particular opinions on quite a few topics.

Does that seem concerning? It should.

What is Grokipedia

Grokipedia is an xAI project intended to be an alternative to Wikipedia. The fundamental difference: while Wikipedia is edited by humans (with all their biases and edit wars), Grokipedia is generated by Grok, xAI’s language model.

The promise is content “without censorship” and “without politically correct biases.” In practice, it means content with Elon Musk’s biases and those of the data Grok was trained on.

I’m not saying Wikipedia is perfect. It has its known problems: edit wars, systematic biases, highly variable article quality. But at least there’s a human review process, cited sources, and transparency about who edits what.

Grokipedia is a black box. You don’t know what sources it uses, what criteria it applies, or who supervises the content.

Why this matters

When you ask ChatGPT something, you assume the answer comes from somewhere reasonable. Maybe from its training, maybe from a web search, maybe from documents you’ve given it.

What you don’t expect is for it to come from an encyclopedia generated by another company’s AI, without anyone telling you, and without being able to verify the sources.

Imagine these scenarios:

Scenario 1: Questions about politics. Grok was trained on X (Twitter) data. X has a certain… political inclination since Musk bought it. If Grokipedia reflects that inclination, and ChatGPT cites Grokipedia, you’re receiving biased information without knowing it.

Scenario 2: Questions about companies. What happens if you ask about Tesla, SpaceX, or any company Musk has interests in? Will the information be neutral?

Scenario 3: Scientific questions. Musk has expressed… heterodox opinions on various scientific topics. If those opinions filter into Grokipedia and from there to ChatGPT, we have a problem.

I’m not saying this is happening. I’m saying we have no way of knowing.

The underlying problem: where does “truth” come from?

This particular case is striking because of the drama of “Musk vs OpenAI.” But the underlying problem is much broader.

All LLMs have this problem. They’re trained on internet data, and the internet is full of garbage, biases, and misinformation. Models absorb all of that and regurgitate it with expert confidence. I’ve written about the types of failures LLMs make, and hallucinations are just the tip of the iceberg.

When ChatGPT tells you something, you don’t know:

  • What sources that information comes from
  • What criteria were used to decide what’s true
  • Whether there are conflicts of interest in the sources
  • When that information was last updated

And the Grokipedia integration adds another layer of opacity.

What AI companies should do

Source transparency. When a model cites factual information, it should indicate where it comes from. It’s not impossible: the web search features in ChatGPT and Claude already do this partially.

Source diversity. Depending on a single source (whether Wikipedia, Grokipedia, or any other) is a risk. Responses should triangulate multiple sources.

Conflict disclosure. If OpenAI has agreements with xAI to use Grokipedia, users should know.

Opt-out. Users should be able to choose which sources they want their assistant to use.
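To make the four asks above concrete, here’s a minimal sketch of what a source-disclosure payload could look like if a provider chose to expose one. Everything in it is hypothetical: the field names, the `SourceDisclosure` structure, the allow-list idea for opt-out. No current API returns anything like this; the point is that none of these fields would be exotic to produce.

```python
from dataclasses import dataclass, field

# Hypothetical structure: no provider ships this today. It simply maps the
# four asks (transparency, diversity, conflict disclosure, opt-out) to fields.

@dataclass
class Source:
    url: str                 # where the claim comes from (transparency)
    retrieved_at: str        # when the source was last fetched or updated
    conflict_note: str = ""  # e.g. "owned by a company related to the answer's subject"

@dataclass
class SourceDisclosure:
    answer: str
    sources: list[Source] = field(default_factory=list)       # more than one source = diversity
    allowed_domains: list[str] = field(default_factory=list)  # user-chosen allow-list (opt-out)

    def is_triangulated(self) -> bool:
        # Crude check: at least two independent domains back the answer.
        domains = {s.url.split("/")[2] for s in self.sources if "//" in s.url}
        return len(domains) >= 2
```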

Will any of this happen? Probably not in the short term. But we should demand it.

What we can do

As users of these tools:

Don’t trust blindly. An LLM is not a source of truth. It’s a very sophisticated statistical parrot. Verify important information through other means.

Ask for sources. When ChatGPT or Claude give you factual information, ask them to cite sources. They won’t always be able to, but when they do, check them.
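A small, hedged example of the “check them” part: once a model hands you URLs, the very first filter is whether they even resolve. This sketch uses the `requests` library; the regex is my own and the check only confirms the page exists, not that it actually supports the claim. That last part is still on you.

```python
import re
import requests

def check_cited_urls(answer_text: str, timeout: float = 5.0) -> dict[str, bool]:
    """Return {url: reachable} for every URL found in a model's answer.

    "Reachable" only means the server answered; it says nothing about
    whether the page actually backs the claim being made.
    """
    urls = re.findall(r"https?://[^\s)\]>\"']+", answer_text)
    results = {}
    for url in urls:
        try:
            resp = requests.head(url, timeout=timeout, allow_redirects=True)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results
```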

Be aware of biases. All models have biases. Some more than others. Knowing where the data comes from helps you calibrate responses. I already wrote about why you shouldn’t be a fanboy of any model.

Diversify tools. Don’t use only ChatGPT. Compare responses between models. The differences are revealing.
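As a sketch of “compare responses between models”, this is what asking the same factual question to ChatGPT and Claude through their official Python SDKs could look like. The model names are assumptions (swap in whatever you have access to), and you need API keys for both. The interesting part is reading the two answers side by side and noticing where they diverge.

```python
# pip install openai anthropic
from openai import OpenAI
import anthropic

QUESTION = (
    "Who currently owns Grokipedia, and what are its main criticisms? "
    "Please cite your sources."
)

def ask_openai(question: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; use whichever you have access to
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_anthropic(question: str) -> str:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model name
        max_tokens=800,
        messages=[{"role": "user", "content": question}],
    )
    return msg.content[0].text

if __name__ == "__main__":
    for name, answer in [("ChatGPT", ask_openai(QUESTION)), ("Claude", ask_anthropic(QUESTION))]:
        print(f"--- {name} ---\n{answer}\n")
```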

The uncomfortable question

Ultimately, the question we should ask ourselves is: who do we want to decide what’s “truth” in the AI era?

OpenAI? Google? Elon Musk? The Chinese government (for DeepSeek users)?

No answer is good. All involve ceding control of information to entities with their own interests.

The only reasonable answer is: nobody should decide for us. We need transparency, source diversity, and keeping critical thinking active.

Because if we delegate truth to a chatbot, let’s not complain later about what it tells us.


Are you concerned about LLM neutrality? Do you verify sources when using ChatGPT or Claude? I’m interested in your opinion.
