AGI doesn’t mean smarter than the best humans.
What could you do at 12, with half of these advantages? Choose any of them, then give yourself infinite time to use them.
Many believe that AGI will happen in robots, and not in online services, simply because interacting with the environment might be a prerequisite for developing consciousness.
You mentioned boredom, which is interesting, as boredom may also be a trait of intelligence. An interesting question is if it will want to live at all. Humans have all these pleasure sensors and programming for staying alive and reproducing. The unburdened AGI in your description might not have good reasons to live. Marvin, the depressed robot, might become real.
We can't even define what consciousness is yet, let alone what's required to develop it.
Technically no, but practically...
A 12-year-old's limitations are: A. gets tired, needs sleep B. I/O limited by muscles
There are probably more, but if a 12-year-old could talk directly to electric circuits and never needed sleep or even a break, then that 12-year-old would be leaps and bounds above the best human in their field of interest.
(Though motivation to finish the task is still needed.)
General intelligence would be like an impulsive 12 year old boy who could see 6 spatial dimensions and regarded us as cartoons for only sticking to 3.
I've seen some use "super" (as in superhuman) intelligence lately to describe what you're getting at.
But if one has {a, b, c} and the other has {b, c, d}, neither is more or less intelligent than the other; they just have different capabilities. "Super" is a bit too one-dimensional for the job.
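The point here is that capability sets form only a partial order, so "more intelligent" is undefined when neither set contains the other. A minimal sketch (the capability names are purely illustrative):

```python
# Hypothetical capability sets for two intelligences (labels are illustrative)
one = {"a", "b", "c"}
other = {"b", "c", "d"}

# Set inclusion is a partial order: when neither set is a superset of the
# other, the two are simply incomparable on this axis.
print(one >= other)   # False: 'one' lacks "d"
print(other >= one)   # False: 'other' lacks "a"
print(one >= {"a", "b"})  # True: a strict superset IS comparable
```

A one-dimensional "super" label only works for the last case, where one capability set strictly contains the other.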
The Lem story "Golem XIV" concerns a machine which claims it possesses categorically superior intelligence, and further that another machine humans have built (which runs but seems unwilling to communicate with them at all) is even more intelligent still.
Golem tries to explain using analogies, but it's apparent that it finds communicating with humans frustrating, the way it might be frustrating to try to explain things to a golden retriever. Lem wrote elsewhere that the single commonality between Golem's intelligence and our own is curiosity; unlike Annie, Golem is curious about the humans, which is why it bothers to communicate with them.
Humans (of course) plot to destroy both machines. Annie eliminates the conspirators, betraying a hitherto unimagined capability to act at great distance, and the story remarks that she seems to do so the way a human would swat a buzzing insect. She doesn't fear the humans; they're simply annoying, so she destroys them without a thought.
I'm a bit tired of the hype surrounding LLMs, but all the same, for mundane and humbler tasks that require some intelligence, modern LLMs manage to surprise me on a daily basis.
But they rarely accomplish more than what a small group of humans with some level of expertise could achieve, when asked.
Certainly, the LLM models we have today are astounding by any measure relative to just a few years ago.
But pronouncements that this will lead to utopia, without a major revision of our economic arrangements, are completely, and surely intentionally/conveniently (Sam isn't an idiot), misleading.
Is OpenAI creating a class of stock so everyone can share in their gains? If not, then AGI owned by OpenAI will make OpenAI shareholders rich, very much to the degree its AGI eliminates human jobs for itself and other corporations.
How does that, as an economic situation, result in the general population being able to do anything beyond be a customer, assuming they can still make money in some way not taken over by AGI?
Utopia needs an actual plan. Not a concept of a plan.
The latter just keeps people snowed and calm ahead of a rug pull of historic proportions.