No, that's not really the case. I don't think you should trust LLM output at all, but in general it's closer to Wikipedia's level of reliability than it is to useless bullshit.
Which is to say: it's useful, but you shouldn't trust it without double-checking.