The concern is the increasing number of anecdotes of people taking ChatGPT's output at face value, even though everyone knows by now that it sometimes gets things wrong and hallucinates. I thought it would take longer.
A very annoying trend I've already started noticing on some forums (including here unfortunately) is someone asking a question and one of the replies is someone saying "I put your question into ChatGPT and it said..."
While it's true that humans are fallible, it's clearer what a given person should and should not get wrong, so we can make safer assumptions about the knowledge we get from certain individuals. ChatGPT doesn't "know" anything; you can get it to be right and wrong about the same detail in the same conversation. There's no clear boundary around what it should be correct about, which makes it a pretty poor source of information.