Excuse me, I was being snarky. I agree that we are low on information, but I guess we disagree on how low. You imply we have so little that the outcome is basically random, which I think is a bit too extreme.
I don't find it far-fetched to imagine current AI research - not just LLMs - getting us to AGI-likeness within 5-10 years. I deliberately say "likeness", because I also think we will never agree on what AGI means or whether we have reached it.
Digressing here because no one will read it anyway, but if my knowledge of humanity is anything to go by, we will be surrounded by robots and software systems of varying levels of intelligence doing just about anything a normal human can do, and we will still be debating whether we have reached mythical AGI.
Personally I think abstract, pure-form AGI is impossible, even in biological systems. We don't scale infinitely, and I don't mean that in the physical "too-much-data-to-handle" sense; I mean qualitatively. I think there is a very real ceiling on what kinds of mentation are possible for us, and thus on what results are achievable. Again, not just quantitatively, but qualitatively.
We have examples of this in the animal realm. Some animals show signs of what is, for all intents and purposes, "general" intelligence. But they will never scale to human levels of cognition. Even simple human concepts are out of reach for them, yet they outperform us on other problems (maze solving, memory, etc.). You cannot teach a great ape what you can teach our children; people have tried. They are general in the sense that you can teach them just about anything within their "bandwidth", but they are still constrained. I very strongly suspect the same holds for us. It's just that we have no superiors (or even peers) to compare ourselves to.
I think future AI systems will doubt we are AGI.