I was just about to comment something similar to this:
"Google CEO Sundar Pichai said in a 60 minutes interview, 'one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know'. This is obviously wrong, so does that mean Pichai was hallucinating?"
At the last second I fact-checked myself and found that it actually wasn't Pichai who said that. Crazy how close I came to confidently spewing bullshit in a comment about how humans can also confidently spew bullshit.
Anyway, my point is: to be on par with humans, LLMs don't need to be right all of the time, only some of the time.