The Original Hallucinator: Why Humans Out-Imagine AI
Humans like to imagine themselves as rational creatures, strolling around with 86 billion neurons and a calm, sober inner narrator. In reality, that narrator behaves more like a slightly tipsy sports commentator who missed half the match but still insists on offering confident play-by-play.
AI “hallucinates,” yes — we’ve all seen the term — when it produces fluent nonsense with a straight face. But if we turn the camera around: what about human intelligence? Who hallucinates more?
The human brain: the original hallucination engine
What we call "perception" is basically a best guess, continuously edited. Your brain is constantly predicting what should be there, then filling in the gaps with whatever the senses happen to confirm. If it waited for perfect information, you'd stand frozen in your hallway every morning, waiting for confirmation that the doorknob really exists.
We see patterns that aren’t there.
We remember events that never happened.
We assign motives people never had.
We invent causes because the brain detests the phrase “no idea”.
This isn’t a glitch — it’s the factory design. Evolution rewarded the brain that interpreted a rustle as a predator rather than “probably nothing”. So we inherit a system tuned for fast guesses, not flawless truth.
AI hallucinations are different — but not necessarily worse
AI makes things up because it is built to produce the most plausible continuation, even when it lacks context, grounding, or relevant training data. It's like a student who writes something because leaving the page blank is forbidden.
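A minimal, hypothetical sketch of what "has to produce an answer" means in practice: a model assigns a score to every candidate token, turns the scores into probabilities, and samples one. The tiny vocabulary and the numbers below are invented purely for illustration; the point is only that the loop always emits something, whether or not it is grounded.

```python
# Illustrative sketch only, not any specific model's real decoding code:
# the model scores every candidate token and must emit one of them.
# There is no built-in "I don't know" unless that answer happens to score well,
# which is one reason fluent nonsense can come out with full confidence.

import numpy as np

def sample_next_token(logits: np.ndarray, temperature: float = 1.0) -> int:
    """Convert raw scores into probabilities and pick a token; something is always returned."""
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())   # numerically stable softmax
    probs /= probs.sum()
    return int(np.random.choice(len(probs), p=probs))

# Toy vocabulary and made-up scores for a question the "model" knows nothing about.
vocab = ["Paris", "Berlin", "Atlantis", "I", "don't", "know"]
logits = np.array([2.1, 1.9, 1.7, 0.3, 0.2, 0.1])  # hypothetical numbers

print(vocab[sample_next_token(logits)])  # confidently prints *something*
```

Real systems layer many safeguards on top of this, but the structural pressure to keep generating is the same.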
Humans make things up because our brains hate uncertainty and are obsessed with creating coherent narratives. We’re running a lifelong improvisation show.
The real contrast is charming:
AI hallucinates mechanically.
Humans hallucinate passionately.
Think of medieval maps with sea monsters, conspiracy theories, gut-feeling politics, superstition, magical thinking, misremembered childhoods — humanity’s distinguished legacy of serious, enthusiastic fabrication.
The crucial twist: humans believe their own hallucinations
When AI produces nonsense, we call it out.
When humans produce nonsense, we call it “intuition,” “experience,” or “analysis”.
Our internal narrator is a master storyteller who convinces us that every guess is a fact. That’s confirmation bias, narrative bias, memory distortion — the whole cognitive carnival.
AI doesn’t want to be right.
Humans desperately want to be right.
That makes our hallucinations stickier and more consequential.
So who hallucinates more?
If hallucination means confidently generating falsehoods:
Humans are the undefeated champions.
We hallucinate more often, more deeply, and — crucially — with more emotional investment. AI just follows probabilities. Humans follow pride, fear, desire, prejudice, wishful thinking. That’s a much hotter fuel.
The productive side of all this
Despite the messiness, human hallucination is also the source of creativity. Our tendency to leap beyond the facts is why we have art, science fiction, philosophy, and half of our everyday humour. The same neural machinery that invents imaginary threats also invents telescopes and symphonies.
AI’s mistakes act like a mirror. They remind us how fragile our own “certainty” really is — and how much of our species’ brilliance and foolishness comes from that thin boundary between insight and imagination.
From there, the real question isn’t who hallucinates more, but how human and machine hallucinations can be used to keep each other honest. Two imperfect narrators, occasionally correcting one another, stumbling toward something that resembles truth.