I can interpret this in a couple of different ways, and I want to make sure I am engaging with what you said, and not with what I thought you said.
> I think better LLMs won’t lead to AGI.
Does this mean you believe that the Transformer architecture won't be an eventual part of AGI? (possibly true, though I wouldn't bet on it)
Does this mean that you see no path for GPT-4 to become an AGI if we just leave it alone sitting on its server? I could certainly agree with that.
Does this mean that something like large language models will not be used, in an eventual AGI architecture, for their ability to model the world, plan, or even just complete patterns the way our own System 1 does? I would have a lot more trouble agreeing with that.
In general, it seems to me that sequence models that actually work are a big primitive we didn't have in 2016, and they certainly strike me as an important step: something that could carry us far past human level, whatever that means for textual tasks.
To bring it back to the article: pure scale probably isn't quite the secret sauce, but it's a good 80-90% of it, and the rest will come from the increased interest, the sheer number of human-level intelligences now working on the problem.
Too bad we haven't scaled safety nearly as fast, though!