I'm not missing that; I'm explicitly disagreeing that GPT shows evidence of this, and pointing out that human observers are mistakenly ascribing generalised intelligence to it on the basis of results that are undeniably impressive, but explainable. The paper itself even opens with "GPT-4 is a Transformer-based model pre-trained to predict the next token in a document". I don't see any evidence of a spontaneous development of intelligence. I do think this work helps us get towards a deeper understanding of the nature of intelligence itself, though, since a lot of what appears intelligent about GPT's behaviour is actually just the combination of a statistical model and an abundance of data - and perhaps that applies to humans too.
I would also point out that emergent general intelligence would be quite an unsurprising result of deep learning for many people, given what we know about the human brain plus some hand-waving about emergent systems. Many people actually expect something like that to happen, and that's exactly why so many of them are jumping to that conclusion about GPT. It's confirmation bias.
But please enlighten me - where is the evidence that GPT-4 has generalised intelligence?