What does it mean when an AI is "hallucinating"?

We say an AI is “hallucinating” (also known as “confabulating”1) when it makes up plausible-sounding but false information in response to a query.

[Image: an example of a hallucinated answer (source). The name is correct, but the record time and date are wrong, and Wandratsch’s crossing was done by swimming, rather than walking (or any combination of the two).]

As of 2024, even the most powerful LLMs still hallucinate sometimes, though they appear to do so less often than earlier models.

One technique used to mitigate hallucinations is RAG (retrieval-augmented generation), in which documents relevant to the query are retrieved and included in the model’s prompt, so the model can ground its answer in that text rather than relying only on what it “remembers”.
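As a rough illustration of the idea, here is a minimal Python sketch of the RAG flow. The document list, the word-overlap relevance score, and the `generate()` placeholder are all illustrative assumptions standing in for a real vector database, learned embeddings, and an actual LLM call; only the overall retrieve-then-prompt structure is the point.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# A real system would use learned embeddings and a vector database;
# here a toy word-overlap score stands in for semantic retrieval,
# and generate() is a placeholder for a call to an actual LLM.

DOCUMENTS = [
    "The English Channel separates southern England from northern France.",
    "Christof Wandratsch crossed the English Channel by swimming, not walking.",
    "Retrieval-augmented generation grounds a model's answer in retrieved text.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: how many query words appear in the document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents with the highest relevance score for the query."""
    return sorted(DOCUMENTS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real language-model call."""
    return f"(model output for a prompt of {len(prompt)} characters)"

def answer(query: str) -> str:
    """Retrieve supporting passages, then ask the model to answer using only them."""
    context = "\n".join(retrieve(query))
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

if __name__ == "__main__":
    print(answer("Did Wandratsch cross the English Channel by walking?"))
```

Grounding the prompt in retrieved sources doesn’t eliminate hallucination, but it gives the model something concrete to quote from, and the instruction to admit when the context is insufficient reduces the pressure to invent an answer.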


  1. While the phenomenon has historically been called “hallucination”, “confabulation” is arguably the more accurate term: it just means “making up a story” and doesn’t imply that the AI is reporting false sensory experiences. ↩︎