> The problem with AI hallucination is not that it confuses similar things (that’s just the model being wrong), but that when the AI has no “clear winning answer”, it can and will respond with absolutely anything within its search space, with apparent disregard for any rules or reality it appeared to have understood in the common case.
This is objectively incorrect. The sampler, not the model, decides which token is actually emitted: you can inspect the next-token distribution at decode time, flag high-entropy cases where there is no clear winner, and choose not to respond rather than emit a guess.
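A minimal sketch of what that looks like inside a sampling loop (the threshold value and function name here are mine, not from any particular library, and the cutoff would need tuning per model and task):

```python
import torch
import torch.nn.functional as F

def sample_or_abstain(logits: torch.Tensor, max_entropy_nats: float = 2.5):
    """Sample the next token, or abstain when the distribution is too flat.

    logits: unnormalized next-token scores, shape (vocab_size,).
    max_entropy_nats: hypothetical cutoff; a flat distribution over a
        50k-token vocab has entropy ~10.8 nats, a confident one is near 0.
    Returns a token id, or None to signal "no clear winning answer".
    """
    probs = F.softmax(logits, dim=-1)
    # Shannon entropy of the next-token distribution, in nats.
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum()
    if entropy > max_entropy_nats:
        return None  # high entropy: refuse / fall back instead of guessing
    return torch.multinomial(probs, num_samples=1).item()
```

A caller that gets `None` back can then surface "I don't know" or route to retrieval instead of letting the model free-associate across its whole search space.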