Seriously, you're veering into sophistry.
People have reputations. They cite sources. Unless they're compulsive liars, they don't tend to just make stuff up on the spot based on what will be probabilistically pleasing to you.
There are countless examples of ChatGPT not just making mistakes but inventing "facts" from whole cloth, not because of misunderstanding or bias or anything else, but simply because the math says it's the best way to complete a sentence.
Let's not use vacuous arguments to dismiss that very real concern.
Edit: As an aside, it only just occurred to me that LLM bullshit generation may actually be more insidious than the human-generated variety. LLMs are specifically trained to produce language that's pleasing, which means they'll try to make sure it sounds right, so the misinformation may turn out to be more subtle and convincing...