There's a fundamental difference between "some people are wrong some of the time"—or even "half the population has trouble telling fact from fiction some of the time," if we grant that for the sake of argument—and "ChatGPT (and similar ML models) have no metric at all for distinguishing truth from fiction; they just predict the most likely sequence of words to string together in response to your prompt."
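To make that concrete, here's a toy sketch—nothing like ChatGPT's actual architecture, which uses a neural network over tokens rather than raw word counts, and the training sentences are made up—but it shows the core point: the only quantity such a model optimizes is likelihood, and a falsehood that appears often in training data is, by that metric, the "best" continuation.

```python
from collections import defaultdict

# A toy bigram "language model": it learns which word tends to follow
# which, purely from co-occurrence counts in its (hypothetical) training text.
training_text = (
    "the sky is green today . "
    "the sky is green again . "
    "the grass is blue . "
)

counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most likely next word -- nothing more."""
    followers = counts[word]
    return max(followers, key=followers.get)

# The model continues "...is" with "green", because that's what its
# training data said most often. It has no representation of whether
# the resulting claim is true; likelihood is the only metric it has.
print(most_likely_next("is"))  # -> green
```

A real LLM replaces the count table with billions of learned parameters, but the objective is the same kind of thing: rank continuations by probability, not by accuracy.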
ChatGPT fundamentally cannot ever know when it's wrong. I should hope it goes without saying that that's not true of humans.