> Except there is no "capacity cap" on statistical models, we have no idea what they are or are not capable of yet.

We do, however, know that the human brain uses a different model and topology, not just a bigger scale.
And we have a good intuition that scaling LLMs as they are (i.e. without changing the architecture) will give us more of the same kind of capabilities they currently have, with the same limitations, not the kind we expect to match human thinking.
Also, empirically we do have an idea of "what they are or are not capable of yet". We have developed them, run them, and scaled them several times.
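To sketch what "more of the same" looks like: empirical scaling-law work (e.g. Kaplan et al. 2020) found that LLM loss falls as a smooth power law in parameter count, which predicts steadily diminishing returns from pure scale rather than qualitatively new behavior. The constants below are made up for illustration, not fitted values:

```python
def loss(n_params: float, a: float = 10.0, alpha: float = 0.076, floor: float = 1.69) -> float:
    """Illustrative power-law loss curve: L(N) = floor + a / N^alpha.

    The functional form matches published scaling laws; the specific
    constants here are hypothetical, chosen only to show the shape.
    """
    return floor + a / (n_params ** alpha)

# Each 10x increase in parameters buys a smaller loss improvement
# than the last one: smooth diminishing returns, no phase change.
for n in [1e9, 1e10, 1e11, 1e12]:
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

The point of the curve is the argument above: extrapolating it gives a better version of the same capability profile, not a different one.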