The more subjective the topic, and the more volatile the user's state of mind, the more likely they are to gaze too deeply into that face on the other side of the funhouse mirror and believe it really is their friend, and that it "thinks" the way they do.
I'm not even anti-LLM as an underlying technology, but the way chatbot companies operate in practice is a novel kind of attack on our social brains, and that warrants a warning!