LLMs cannot create new knowledge; they can only reorganize existing knowledge. Critically, humans don't create new knowledge by induction, yet induction is all these LLMs can do. (e.g. To explain the origin of the universe, there is nothing to induce from, as the Big Bang is not observable.)
I don't see how training these models further will cause this property to emerge. But knowledge creation is precisely what we want when we say we want AGI.
The reason I describe AGI in terms of knowledge creation is that we have only one example of a general intelligence: humans. A popular idea is that machine AGI will look very different from human AGI (e.g. no consciousness, intrinsic motivation, or qualia).
But this is a bold claim. It contends that there are multiple kinds of general intelligence rather than one universal kind. Other universal concepts don't look like this. Consider the universality of computation: a computer is either Turing-complete or it isn't. There is no other kind of general-purpose computer.
Once we had built a Turing-complete computer, it could perform any conceivable computation. This is the power of universality. The first Turing-complete computer could, in principle, play Call of Duty (though no one would have had the patience to actually play it).
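To make the universality point concrete, here is a minimal sketch (my own illustration, not from the argument above): one fixed interpreter that can run *any* Turing machine supplied as data. The rule format and the example "bit-flipper" machine are invented for illustration.

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a Turing machine.

    rules maps (state, symbol) -> (new_symbol, move, new_state),
    where move is -1 (left), 0 (stay), or +1 (right).
    The machine halts when it enters the state 'halt'.
    """
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example machine: flip every bit, moving right until the first blank.
flip = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_tm(flip, "0110"))  # → 1001
```

The interpreter itself never changes; swapping in a different rule table runs a different computation. That is the sense in which one universal machine covers all of them.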
It's not as though there is some mode of ChatGPT that would produce AGI-level responses if given months or years of compute.
Fundamentally, it lacks the ability to do whatever it is that AGIs do (the AGI analogue of universal Turing computation).