It kept looping on concepts of how AI could change the world, but it would never give anything tangible or actionable, just buzz word soup.
I think these LLMs (without any intention on the LLM's part) hijack something in our brains that makes us think they are sentient. When they make mistakes, our reaction seems to be to forgive them rather than think, "it's just a machine that sometimes spits out the wrong words."
Also, my apologies to the mods if it seems like I am spamming this link today, but I think the situation with these beetles is analogous to humans and LLMs:
https://www.npr.org/sections/krulwich/2013/06/19/193493225/t...