> The study of the latter potentially provides insight to former

It absolutely does not.
The term "hallucinate," as applied to LLMs, says nothing about the underlying cause or process. It's a metaphor that I feel was specifically chosen by the AI industry to avoid terms like "lying" or "making a mistake".
LLMs hallucinate because they're next-word prediction engines, not logical engines. They have no conceptual understanding, which is why their hallucinations range from small factual errors to bizarre lies that are obviously false.
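To make the "next-word prediction engine" point concrete, here's a minimal sketch of what inference actually looks like: greedily picking the single most probable next token in a loop. The gpt2 model and the Hugging Face transformers library are just illustrative choices on my part, not anything from the discussion above.

```python
# Minimal sketch (illustrative assumptions: transformers library, gpt2 model):
# an LLM at inference time just repeatedly picks a likely next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The first person to walk on the moon was", return_tensors="pt").input_ids

for _ in range(8):
    with torch.no_grad():
        logits = model(ids).logits        # a score for every token in the vocabulary
    next_id = logits[0, -1].argmax()      # take the single most probable next token
    ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# The continuation is whatever is statistically likeliest given the prompt;
# at no point is anything checked against facts or a model of the world.
```

Nothing in that loop involves a concept of truth, which is the whole point: a confidently wrong continuation and a correct one are produced by exactly the same mechanism.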
Humans hallucinate because our brains are reality-simulation engines that evolved to model the perception of events that aren't currently happening (remembering the past, imagining the future, etc.), and sometimes we lose control of that simulation. The underlying process has nothing to do with predicting the next word we're going to say.
In fact, there's an even more fundamental difference: human hallucinations aren't necessarily tied to speech. We have thoughts that don't require output. There's nothing in LLMs that doesn't "come out" as output.