But the arguments for both sides consist of a combination of outright lies and gross hyperbole. LLMs are undeniably new, and cool as hell. We killed the Turing test! Computers are now dramatically better at understanding human language than they were 5 years ago. And we are seeing improvement, perhaps not at the same rate we were 5 years ago, but still at an impressive clip. It's far from impossible that, within the foreseeable future, we'll have language models that can at the very least reliably (as in, you don't need to constantly check their work) parse human language for interfacing with more traditional computation.
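To make that "interfacing with traditional computation" idea concrete, here's a minimal sketch of the pattern: a language model parses a free-form request into structured intent, and deterministic code validates and executes it. The LLM call is mocked here (`mock_llm_parse` is a hypothetical stand-in, not a real API), since the point is the division of labor, not any particular model.

```python
import json

def mock_llm_parse(utterance: str) -> str:
    # Hypothetical stand-in for a language model that extracts
    # structured intent from natural language as JSON.
    return json.dumps({"action": "convert", "value": 72,
                       "from": "F", "to": "C"})

def dispatch(intent_json: str) -> float:
    # Traditional, deterministic computation: same input, same output,
    # and malformed intents are rejected rather than guessed at.
    intent = json.loads(intent_json)
    if intent.get("action") != "convert":
        raise ValueError("unsupported action")
    if (intent["from"], intent["to"]) == ("F", "C"):
        return (intent["value"] - 32) * 5 / 9
    raise ValueError("unsupported unit pair")

result = dispatch(mock_llm_parse("What is 72 Fahrenheit in Celsius?"))
```

The appeal of this split is that the fuzzy part (understanding the request) is isolated from the part that must be correct (the computation), so the model's output can be checked against a schema before anything runs.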
On the other hand, it's definitely not AGI yet, and a machine that can't consistently be trusted to do a job is of inherently limited utility. Companies investing in the space are definitely burning money on a moonshot.
As for sounding natural, like a human being, when you talk to it: I hear there are actually better models for that than LLMs.
If an LLM were really able to parse language, then things like hallucination wouldn't happen.
As a computer engineer on the sidelines of this field, I am not inclined to trust the output of a general-purpose AI whose use case is still unclear over a proven algorithm that reliably produces the same output for the same input.
I deliberately avoided the problem solving/doing work side of things in discussing modern AI, because that's a place where a significant amount of progress is still necessary before it's useful. I completely agree that its abilities in that regard, at present, are being grossly overblown. But the ability to parse human language, decipher intent, and synthesize responses in human language very much is a new capability that modern LLMs are extremely good at, and it will likely reach the reliability levels necessary for autonomous application in the very near future.
A program that can parse human language perfectly absolutely could still hallucinate. When I ask an LLM a question and it makes up a patently false response, it accurately parsed what I asked of it; it just failed to synthesize a correct response. The parsing of human language, and the synthesis of information into human language, is in and of itself a powerful capability that we shouldn't overlook just because it's no longer science fiction.
Eventually their runway will run out -- and, given their costs, that might be soon.
And I don't think there's anything wrong with that. Compared to most things that we burn resources on, at least the AI investment has produced something that doesn't unambiguously make the world a worse place.