This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI-level capability.
This is a genuine attempt to inform myself. Could you point to those sorts of claims from experts at the top?
There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation. Most of these people have strong personalities, to put it mildly, and they often have vested interests in pushing their preferred approach and their view of how AI does or should work.
After their success, I definitely saw a ton of blog posts and general "AI chatter" claiming that to get to AGI all you really needed to do (obviously I'm simplifying things a bit here) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT-5 essentially became 4.5) when they found that they weren't getting the performance/functionality advances they expected after massively scaling up their model.