“Meh, it’s just a fancy word predictor. It’s not actually useful.”
“Boring, it’s just memorizing answers. And it scored in the lowest percentile anyways.”
“Sure, it’s in the top percentile now but honestly are those tests that hard? Besides, it can’t do anything with images.”
“Ok, it takes image input now but honestly, it’s not useful in any way.”
There are two mistakes people make with this:
1) assuming this is the definitive and final answer as to what AI can do. Anything you think you know about this technology’s limitations is probably already a bit out of date. OpenAI has been sitting on this one for some time and is probably already working on v5 and v6, and those are not going to take long to arrive. This is exponential, not linear, progress.
2) assuming that their own abilities are impossible for an AI to match and that this won't affect whatever it is they do. I don't think there's a lot that is fundamentally out of scope here, just a lot that needs to be refined further. Our jobs are increasingly going to involve working with, delegating to, and deferring to AIs.
You will see skepticism until the technology is ubiquitous; Tesla’s self-driving tech, for example, has advanced iteratively, and there are still skeptics of its current implementation.
It’s one thing to be skeptical; it’s another to keep making wrong assertions and predictions about the pace of advancement out of a quasi-religious belief that humans with meat-brains are somehow fundamentally superior.
Intelligence and consciousness are at the fringe of our understanding, so skepticism seems like a reasonable, scientific way to categorize computer programs that are intended to be called “artificial intelligence”. We refine our hypothesis of “this is artificial intelligence” once we gain more information.
You’re free to disagree, of course, or to call these early programs “artificial intelligence”, but for a lot of folks they don’t satisfy the crude hypothesis above. This doesn’t mean they aren’t in some ways intelligent (pattern recognition could be a kind or degree of intelligence; it certainly seems to be required).
If you traveled back 50 years and told people that in the future a computer could ace almost any exam given to a high school student, most of them would consider that a form of AGI.
Now, the goalpost has shifted to “It’s only AGI if it’s more intelligent than the totality of humans”.
If you haven’t heard anyone claim that we’ve made advances in AGI, you’ve heard it here first: I think GPT-3 and its successors are a significant advancement in humanity’s attempts to create AGI.
That being said, significant things are being achieved in the field of machine learning. I was wowed by DeepMind's AlphaZero and its achievements in 'teaching itself' to play Go at a level never seen before. I'm impressed by what Tesla is doing with self-driving. I'm less impressed by OpenAI's GPT-x, because I don't think it's very useful technology (despite all the, in my opinion, foolish talk of it doing away with all sorts of knowledge jobs and being able to 'tutor' people), but I do recognise that it marks a step up in machine learning in the area of LLMs. None of this is 'Artificial Intelligence', however, and it is both silly and dangerous to conceptualise it as such.
What is the human brain, then? I'm afraid you are bound to push the definition so far that humans no longer qualify as intelligent.
We also have extensive studies on all the ways we are actually really bad at processing input (a by-product of our primate ancestral heritage). There are entire textbooks on all of the different biases we have built-in. And there are clear and obvious limits to our perception, as well (I'm thinking of the five senses here).
Imagine you're constrained on neither the input side nor the processing side of this equation. It becomes a kind of mathematical inevitability that we will be able to create artificial intelligence. When anything can be tokenized and act as an "input", and we can run that through something that can process it the way our brains can, only scaled up 10-fold (or more)...
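The "anything can be tokenized" point can be made concrete with a toy sketch. The trivial byte-level mapping below is my own illustration, not any real model's tokenizer:

```python
# Toy illustration: data of any modality can be reduced to a sequence of
# integer token IDs - the only kind of input a transformer-style model
# ever sees. Here the mapping is trivially one token per byte.

def tokenize(data: bytes) -> list[int]:
    """Map raw bytes to integer token IDs (one token per byte)."""
    return list(data)

def detokenize(tokens: list[int]) -> bytes:
    """Invert the mapping back to the original bytes."""
    return bytes(tokens)

# Text and pixels end up in the same representation: a list of integers.
text_tokens = tokenize("hello".encode("utf-8"))
pixel_tokens = tokenize(bytes([255, 0, 127]))  # e.g. raw pixel values

assert detokenize(text_tokens) == b"hello"
assert text_tokens == [104, 101, 108, 108, 111]
```

Real tokenizers are learned (byte-pair encoding, image patches, audio frames), but the principle is the same: once everything is a token sequence, one architecture can consume it all.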
If there is one thing we're good at, it is thinking that we are the center of the universe. I think that is blinding people to the possibility of AI. We can't fathom it, for lots of good and bad monkey reasons.
Living in that sort of bubble must be very uncomfortable. Companies from virtually every category are pouring money into OpenAI, starting with Microsoft. Just go and take a look at their partners and which fields they belong to.
And it's remarkable that you cite Microsoft's involvement as some sort of standard of significance. A company with a long history of non-innovation, alongside a disgraceful history of suffocating and extinguishing actual innovation, founded by one of the most remarkably unimaginative and predatory individuals in the software industry. I'd suggest that Microsoft investing in anything is only a sign of a potential future rort (Gates' whole history of making money).
I acknowledge and am mostly fine with the idea that machines can 'learn'. But they learn (the game of Go, navigating a car in the real world, etc.) under our direction and training (even if they potentially go on to surpass our abilities in these tasks). They don't have any agency; they don't have any curiosity; they don't have any 'spirit of consciousness'; they are not intelligent. They have simply been trained, and have learnt, to perform a task. It's a great mistake to confuse this with intelligence. And the field itself is acknowledging this mistake as it matures, with the ongoing change of nomenclature from 'artificial intelligence' to 'machine learning'.
GPT is limited by its own design. The network is crude at the architectural level - which makes it easy to copy - and has merely been scaled to an unusual degree - which is the factor behind the recent development. The current situation is almost like running BFS on a cluster during a chess match: certainly the AI will be able to beat a human, but that can hardly change anything in real life, because it's just BFS.
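For readers unfamiliar with the analogy: breadth-first search is pure enumeration. A minimal sketch below shows brute-force search "beating" a human at a task without anything resembling understanding; the "+3 / *2" number puzzle is a made-up example, nothing to do with GPT's actual internals:

```python
from collections import deque

# Brute-force BFS over a toy puzzle: starting from 1, reach a target
# number using only the operations "+3" and "*2". BFS finds the shortest
# operation sequence by sheer enumeration - no insight, just search.

def bfs_solve(start: int, target: int) -> list[str]:
    """Return the shortest op sequence turning start into target, or []."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, path = queue.popleft()
        if value == target:
            return path
        for op, nxt in (("+3", value + 3), ("*2", value * 2)):
            if nxt <= target and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [op]))
    return []  # target unreachable

print(bfs_solve(1, 14))  # shortest sequence from 1 to 14
```

Given enough hardware, the same enumeration scales to enormous state spaces, which is the commenter's point: outperforming a human at a bounded task is not, by itself, evidence of intelligence.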
I find the real problem with AI is that there are people who freak out and extrapolate from a select few examples. Meh, let GPT do that - it can't, by design. We still have a lot of things to do before AIs become generally applicable.