It is impressive and very unintuitive just how far that can get you, but it's not reductive to use that label. That's what it is on a fundamental level, and aligning your usage with that will let you use it more effectively.
But the endless claims that the fact they're "just" predicting tokens implies something about their computational power rest on flawed assumptions.
The last time I had this discussion with people, I pointed out how LLMs consistently and completely fail at applying grammar production rules (obviously you tell them to apply the rules to words rather than single letters, so you aren't fighting the tokenization).
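For concreteness, here is a minimal sketch of what such a word-level production-rule task looks like; the rules, words, and helper names below are made up for illustration, not the ones from that discussion. The point is that the task is purely mechanical:

```cpp
// Illustration of a word-level production-rule task (made-up rules and words):
// repeatedly rewrite a sequence of words using "lhs -> rhs" rules until no
// rule applies.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

using Words = std::vector<std::string>;

struct Rule {
    Words lhs;  // pattern to match, e.g. {"red", "fox"}
    Words rhs;  // replacement,      e.g. {"fox", "red"}
};

// Apply the first rule that matches, at the leftmost position; return whether
// any rewrite happened.
bool applyOnce(Words& w, const std::vector<Rule>& rules) {
    for (std::size_t i = 0; i < w.size(); ++i) {
        for (const Rule& r : rules) {
            if (i + r.lhs.size() > w.size()) continue;
            if (!std::equal(r.lhs.begin(), r.lhs.end(), w.begin() + i)) continue;
            w.erase(w.begin() + i, w.begin() + i + r.lhs.size());
            w.insert(w.begin() + i, r.rhs.begin(), r.rhs.end());
            return true;
        }
    }
    return false;
}

int main() {
    // Made-up rules: swap "red fox" -> "fox red", delete "very".
    std::vector<Rule> rules = {{{"red", "fox"}, {"fox", "red"}},
                               {{"very"}, {}}};
    Words sentence = {"the", "very", "red", "fox", "runs"};
    while (applyOnce(sentence, rules)) {}
    for (const auto& word : sentence) std::cout << word << ' ';
    std::cout << '\n';  // prints: the fox red runs
}
```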
LLMs do some amazing stuff, but at the end of the day:
1) They're just language models. While many things can be described with languages, there are some things that idea doesn't capture, namely languages that aren't modeled, which is the whole point of a Turing machine.
2) They're not human, and the value is always going to come from human socialization.
So the argument goes: LLMs were trained to predict the next token, and the most general way to do this successfully is to encode real understanding of the semantics.
It's even crazier that some people believe that humans "evolved" intelligence just by nature selecting the genes which were best at propagating.
Clearly, human intelligence is the product of a higher being designing it.
/s
There's a branch of AI research I briefly worked in 15 years ago that's based on exactly that premise: genetic algorithms/programming.
So I'd argue humans were (and are continuously being) designed, in a way.
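For anyone unfamiliar, the premise boils down to a simple loop: keep a population of candidate solutions, score them, and let the fittest reproduce with mutation. A minimal sketch below; the bitstring genome and toy fitness function are made up for illustration, not from any real project.

```cpp
// Minimal genetic-algorithm sketch: selection, crossover, and mutation over
// bitstring genomes, maximizing a toy "one-max" fitness (count of 1-bits).
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

using Genome = std::vector<int>;  // bitstring genome

// Toy fitness: number of 1-bits in the genome.
int fitness(const Genome& g) {
    return static_cast<int>(std::count(g.begin(), g.end(), 1));
}

int main() {
    std::mt19937 rng(42);
    std::bernoulli_distribution coin(0.5);
    std::bernoulli_distribution mutate(0.01);

    const int kPop = 50, kLen = 64, kGenerations = 200;

    // Random initial population.
    std::vector<Genome> pop(kPop, Genome(kLen));
    for (auto& g : pop)
        for (auto& bit : g) bit = coin(rng) ? 1 : 0;

    for (int gen = 0; gen < kGenerations; ++gen) {
        // Selection: sort by fitness, keep the top half as parents.
        std::sort(pop.begin(), pop.end(), [](const Genome& a, const Genome& b) {
            return fitness(a) > fitness(b);
        });
        // Reproduction: refill the bottom half with mutated crossovers of parents.
        std::uniform_int_distribution<int> pick(0, kPop / 2 - 1);
        for (int i = kPop / 2; i < kPop; ++i) {
            const Genome& mom = pop[pick(rng)];
            const Genome& dad = pop[pick(rng)];
            for (int j = 0; j < kLen; ++j) {
                pop[i][j] = coin(rng) ? mom[j] : dad[j];     // uniform crossover
                if (mutate(rng)) pop[i][j] = 1 - pop[i][j];  // point mutation
            }
        }
    }
    std::cout << "best fitness: " << fitness(pop[0]) << " / " << kLen << '\n';
}
```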
After walking through a short debugging session in which it tried the four things I'd already thought of and eventually suggested (assertively, but correctly) where the problem was, I had my resolution.
There are a lot of questions I have about how this kind of mistake could simply be avoided at the language level (parent-function accessibility modifiers, enforcing an override specifier, not supporting this mistake-prone structure in the first place, and so on...). But it did get me unstuck, so in this instance it was a decent, if probabilistic, rubber duck.
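The original code isn't shown here, so this is only a guess at the shape of the bug, but the override-specifier point generalizes: a derived method that was meant to override a base-class virtual can silently fail to, and the specifier turns that silent miss into a compile error. Class and method names below are invented for illustration.

```cpp
// A guess at the shape of the mistake: a derived method meant to override a
// base-class virtual, but with a slightly different signature, so it silently
// declares a new function instead of overriding.
#include <iostream>

struct Widget {  // invented names for illustration
    virtual void draw(bool highlighted) const {
        std::cout << "base draw, highlighted=" << highlighted << '\n';
    }
    virtual ~Widget() = default;
};

struct FancyWidget : Widget {
    // Oops: 'int' instead of 'bool'. This does not override; it only hides the
    // base overload inside FancyWidget's scope, so virtual dispatch through a
    // Widget reference still calls the base version.
    void draw(int highlighted) const {
        std::cout << "fancy draw\n";
    }
    // With the override specifier, the same typo is a compile error instead:
    //   void draw(int highlighted) const override;  // error: does not override
};

int main() {
    FancyWidget w;
    const Widget& base = w;
    base.draw(true);  // prints "base draw, highlighted=1", not "fancy draw"
}
```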
> it was a decent, if probabilistic, rubber duck
How is it a rubber duck if it suggested where the problem was?
Isn't a rubber duck a mute object which you explain things to, and in the process you yourself figure out what the solution is?