Consuming and producing vast amounts of information is what makes the problem potentially worse than human groupthink: it enables a situation where AI is mostly consuming information produced by other AI. That's the feedback loop I'm calling "groupthink." Its output could drift away from reality the way chaotic functions diverge widely from tiny differences in initial conditions. The same problem exists for any other kind of information the AI both consumes and produces.
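The chaos analogy can be made concrete in a few lines of Python. This is a standalone illustration, not a model of AI training: the logistic map with r=4 is a textbook chaotic function, and a starting-point perturbation of one part in a billion grows until the two trajectories bear no resemblance to each other.

```python
# Sensitive dependence on initial conditions, shown with the logistic
# map x -> r*x*(1-x) at r=4, where the map is fully chaotic.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-9)  # differs by one part in a billion
diffs = [abs(x - y) for x, y in zip(a, b)]

print(f"difference after 5 steps: {diffs[5]:.2e}")        # still tiny
print(f"largest difference, steps 30-50: {max(diffs[30:]):.2f}")  # order 1
```

On average the gap roughly doubles each step, so by around step 30 the initial billionth has grown to the full size of the interval and the two runs are effectively unrelated.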
Humans are more grounded because they have a presence in the physical world, and they draw on sources generally considered more reliable: formal training, scientific papers, textbooks, quality journalism, and so on. If we want AI to be reliable, we'll need it to put the most weight on similar sources, and perhaps even to have some real-world presence through sensors and robots.
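"Put the most weight on similar sources" could be as simple as a reliability-weighted vote over claims. The sketch below is entirely hypothetical: the source categories, the weights, and the scoring rule are invented for illustration and don't reflect how any real system works.

```python
# Hypothetical reliability-weighted aggregation: each source type gets
# an assumed trust weight, and a claim's score is the weighted sum of
# votes (+1 asserting, -1 disputing) from the sources that mention it.

RELIABILITY = {               # invented weights; higher = more trusted
    "textbook": 0.9,
    "peer_reviewed_paper": 0.85,
    "quality_journalism": 0.6,
    "social_media": 0.2,
    "ai_generated": 0.1,      # discounted to damp the feedback loop
}

def claim_score(votes):
    """votes: list of (source_type, +1 or -1). Returns the weighted sum."""
    return sum(RELIABILITY[src] * v for src, v in votes)

votes = [("textbook", +1), ("ai_generated", -1), ("social_media", -1)]
print(claim_score(votes))  # positive: the textbook outweighs the rest
```

The design choice worth noting is the low weight on AI-generated text itself: explicitly discounting it is one crude way to keep the feedback loop from dominating.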
Eventually AI will be able to produce new reliable information itself. But for that, it would have to recognize factual inconsistencies between sources and logical inconsistencies in arguments, figure out how to resolve them, and do math correctly. I don't know what the state of the art is here, though ChatGPT tends to fail at basic arithmetic.
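One obvious mitigation for the arithmetic problem is not to trust the model's answer at all, but to recompute it exactly. A minimal sketch, assuming the claimed answers below are a model's (invented) outputs: Python integers are arbitrary-precision, so the check itself can't lose digits.

```python
# Ground an AI's arithmetic by recomputing it with exact integer
# arithmetic instead of trusting the generated answer. The
# "claimed" values here are invented examples of model output.

def check_arithmetic(a, b, op, claimed):
    """Recompute a simple binary operation exactly and compare."""
    ops = {"+": a + b, "-": a - b, "*": a * b}
    return ops[op] == claimed

print(check_arithmetic(123456789, 987654321, "*", 121932631112635269))  # True
print(check_arithmetic(123456789, 987654321, "*", 121932631112635268))  # False: off by one
```

The same pattern generalizes: wherever a claim can be checked mechanically (arithmetic, unit consistency, citation lookup), the check is cheap and exact even when the generator is unreliable.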