I wonder, do you have a hypothesis as to what would be a measurement that would differentiate AGI vs Not-AGI?
So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent. In fact, you would prefer to feed it as minimal a set of the most primitive first principles as possible, because it's certain that much of what we think is true will end up being not quite so -- the same as for humanity at any other given moment in time.
We could derive more basic principles, but this one is fundamental and already completely incompatible with our current direction. Right now we're trying to train on essentially the entire corpus of human writing. That is a de facto acknowledgment that the absolute endgame for current tech is simple mimicry, mistakes and all. It'd create a facsimile of impressive intelligence, because no human would have a remotely comparable knowledge base, but it'd basically just be a glorified natural language search engine - frozen in time.
> So one fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent.
The first 22 years of life for a “western professional adult” are literally dedicated to a giant bootstrapping info dump.
If you took the average human from birth and gave them only 'the most primitive first principles', it's doubtful they would ever have novel insights into medicine.
I also disagree with your following statement:
> Right now we're trying to essentially train on the entire corpus of human writing. That is a defacto acknowledgement that the absolute endgame for current tech is simple mimicry
At worst it's complex mimicry! But I would also say that mimicry is part of intelligence in general and part of how humans discover. It's also easy to see that AI can learn things - you can teach an AI a novel language by feeding a fairly small amount of vocabulary and grammar examples into its context.
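To make the in-context claim concrete, here's a minimal sketch of how such a prompt gets assembled. The invented lexicon and grammar rule below are hypothetical examples, not a real language, and the prompt format is just one plausible way to present it to a model:

```python
# Hypothetical mini-language: a few words plus one grammar rule,
# given entirely in the prompt rather than in training data.
LEXICON = {"mira": "water", "tolu": "drink", "ka": "I"}
GRAMMAR = "Word order is subject-object-verb; 'ka mira tolu' means 'I drink water'."

def build_prompt(sentence: str) -> str:
    """Assemble an in-context 'language lesson' followed by a translation task."""
    vocab = "\n".join(f"{word} = {meaning}" for word, meaning in LEXICON.items())
    return (
        "You will translate an invented language.\n"
        f"Vocabulary:\n{vocab}\n"
        f"Grammar: {GRAMMAR}\n"
        f"Translate into English: {sentence}"
    )

prompt = build_prompt("ka mira tolu")
print(prompt)
```

A capable LLM given this prompt will typically translate sentences it has never seen, which is the kind of in-context learning the comment is pointing at.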
I also disagree with this statement:
> One fundamental difference is that AGI would not need some absurdly massive data dump to become intelligent
I don't think how something became intelligent should affect whether it is intelligent or not. These are two different questions.
EDIT: There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
"AGI" was already a goalpost move from "AI" which has been gobbled up by the marketing machine.
This is active research: how to classify AGI systems is still being debated by AI researchers.
It's a classification system for AGI, not a redefinition. It's a refinement.
Also there is no universally accepted definition of AGI in the first place.
I don't know if it is optimism or delusions of grandeur that drives people to make claims like AGI will be here in the next decade. No, we are not getting that.
And what do you think would happen to us humans if such AGI is achieved? People's ability to put food on the table depends on exchanging their labor for money. I can guarantee that work will still exist, but will it be equitable? Available to everyone? Absolutely not. Even UBI isn't going to cut it, because experiments have shown that people still want to work even with UBI. But much of today's work won't survive, especially mid-level paper-pushing like managers on top of managers.
If we actually get AGI, you know what would be the smartest thing for such an advanced being to do? It would probably kill itself, because it would conclude that living is a futile effort. If you are that smart, nothing motivates you anymore. You'd just be a depressed mass for the rest of your life.
That's just how I feel.
The two concepts have historically been inexorably linked in sci-fi, which will likely make the first AGI harder to recognize as AGI if it lacks consciousness, but I'd argue that simple "unconscious AGI" would be the superior technology for current and foreseeable needs. Unconscious AGI can be employed purely as a tool for massive collective human wealth generation; conscious AGI couldn't be used that way without opening a massive ethical can of worms, and on top of that its existence would represent an inherent existential threat.
Conscious AGI could one day be worthwhile as something we give birth to for its own sake, as a spiritual child of humanity that we send off to colonize distant or environmentally hostile planets in our stead, but isn't something I think we'd be prepared to deal with properly in a pre-post-scarcity society.
It isn't inconceivable that current generative AI capabilities might eventually evolve to such a level that they meet a practical bar to be considered unconscious AGI, even if they aren't there yet. For all the flak this tech catches, it's easy to forget that capabilities which we currently consider mundane were science fiction only 2.5 years ago (as far as most of the population was concerned). Maybe SOTA LLMs fit some reasonable definition of "emerging AGI", or maybe they don't, but we've already shifted the goalposts in one direction given how quickly the Turing test became obsolete.
Personally, I think current genAI is probably a fair distance further from meeting a useful definition of AGI than those with a vested interest in it would admit, but also much closer than those with pessimistic views of the consequences of true AGI tech want to believe.
The Turing test was passed. Pre-ChatGPT, I would not have believed that would happen so soon.
LLMs ain't AGI, sure. But they might be an essential part, and the missing pieces may already exist, just not put together yet.
And there will always be plenty of work. Distributing resources might require new approaches, though.
I don't think there has ever been a time in history when work has been equitable and available to everyone.
Of course, that isn't to say that AI can't make it worse than it is now.
Name me a human who doesn't need direction or guidance to do a task, at least one they haven't done before.
There can be levels of AGI. Google DeepMind have proposed a framework that would classify ChatGPT as "Emerging AGI".
ChatGPT can solve problems that it was not explicitly trained to solve, across a vast number of problem domains.
https://arxiv.org/pdf/2311.02462
The paper is summarized here https://venturebeat.com/ai/here-is-how-far-we-are-to-achievi...
Here is a mainstream opinion on why AGI is already here, written by one of the authors of the most widely read AI textbook, Artificial Intelligence: A Modern Approach: https://en.wikipedia.org/wiki/Artificial_Intelligence:_A_Mod...
Can ChatGPT drive a car? No, we have specialized models for driving vs generating text vs image vs video etc etc. Maybe ChatGPT could pass a high school chemistry test but it certainly couldn't complete the lab exercises. What we've built is a really cool "Algorithm for indexing generalized data", so you can train that Driving model very similarly to how you train the Text model without needing to understand the underlying data that well.
The author asserts that ChatGPT is general because it can generate text about so many topics, but it's really only doing one thing, and that isn't very general.
I think we need to separate the thinking part of intelligence from tool usage. Not everyone can use every tool at a high level of expertise.
Likewise for "intelligent", and even "artificial".
So no, ChatGPT can't drive a car*. But it knows more about car repairs, defensive driving, global road features (geoguesser), road signs in every language, and how to design safe roads, than I'm ever likely to.
* It can also run python scripts with machine vision stuff, but sadly that's still not sufficient to drive a car… well, to drive one safely, anyway.
This doesn’t imply that it’s ideal for driving cars, but to say that it’s not capable of general intelligence is incorrect in my view.
Same model trained on audio, video, images, text - not separate specialized components stitched together.
Last time I checked, in an Anthropic paper, they asked the model to count something, and by examining the logits they produced a graph showing how it actually arrived at the answer. Then they asked the model to explain its reasoning, and it gave a completely different explanation, because that was the most statistically probable response to the question. Does that seem like AGI to you?