Now, instead of authoritative-sounding humans who may be wrong or have an agenda, we have an "infallible", "impartial" oracle capable of inventing whatever it wants. Yes, I have seen ChatGPT treated as infallible on this very forum: "you are wrong because ChatGPT says <insert nonsense here>". Can't wait to see LLMs take "post-truth" to a whole new level. The propaganda potential is immense, to name just one application.