This is not "obvious" in any sense of the word. At best, it's highly debatable.
What definition do you have for intelligence and how do LLMs fail to meet it?
I just can't take seriously the idea that there is any ambiguity about whether these things have general problem-solving skills. They obviously do.
As I asked up-thread, if I had a chat window open with you, what's something you would be able to say or do that an unrestricted ChatGPT wouldn't?
Does a dog or cat have intelligence?
If you answered no, then I would ask whether you believe that by some measure a dog or cat has more intelligence than a rock.
And as a follow-on I would ask if you think GPT demonstrates more intelligence than a dog or a cat.
But perhaps you believe that not one of these examples is a case where something "obviously has some form of intelligence."
(I am really trying to highlight the semantic ambiguities)
Just like chatbots 20 years ago didn't, even though they could talk, too.
Because from where I sit it's a distinction without a meaningful difference.
Sure, it behaves as if it has some form of intelligence in the sense that it can take external input, perform actions in reaction to this input, and produce outputs dependent on the input.
This historically has been known as a computer program.
Even with GPT-4, it is easy to get it into a loop where it just swaps one wrong answer for another. It doesn't act like it is intelligent - it acts like it is trying to predict the next text to display, because that is what it is doing!
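To make that concrete, here is a minimal sketch of the next-token loop these models run at inference time. The model name and greedy argmax choice are illustrative (real chat systems sample and add plenty of machinery on top), not a claim about ChatGPT's internals:

```python
# Minimal sketch of autoregressive decoding: at each step the model
# scores every vocabulary token and one is appended to the prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits        # scores for every possible next token
        next_id = logits[0, -1].argmax()  # greedy: pick the single most likely one
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```

That loop is the whole inference-time mechanism: there is no separate "answer" step, only repeated next-token prediction, which is why it can cycle between wrong answers indefinitely.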