We have "real ai" already.
As for future progress, have you tried just simple extrapolation of the progress so far? Human-level intelligence is very near. (Though of course artificial intelligence will never exactly match human intelligence: it will be ahead or behind in certain aspects...)
- Simple extrapolation of the progress is exactly the problem here. Look at the historical graphs of AI funding and tell me with a straight face that we absolutely must extrapolate the current trend.
- Nope, human-level intelligence is not even close. It remains as nebulous and out of reach as ever. ChatGPT's imitation of intelligent speech falls apart very quickly when you chat with it for more than a few questions.
I think we expect AGI to be much smarter than the average joe, and free of occasional stupidity.
What we’ve got is an 85-IQ generalist with unreliable savant capabilities that can also talk to a million people at the same time without getting distracted. I don’t see how that isn’t a fundamental shift in capability.
It’s just that we expect it to be spectacularly useful. Not like homeless joe, who lives down by the river. Unfortunately, nobody wants a 40 acre call center of homeless joes, but it’s hard to argue that HJ isn’t an intelligent entity.
Obviously LLMs don’t yet have a control and supervision loop that gives them goal-directed behaviour, but they also don’t have a drinking problem and debilitating PTSD with a little TBI thrown in from the last war.
It’s not that we aren’t on the cusp of general intelligence, it’s that we have a distorted idea of how useful that should be.
Very shallow assessment. First, it's not a generalist at all: it has zero concept of what it's talking about. Second, it gets confused easily unless you order it to keep context in memory. Third, it can't perform unless it regularly swallows petabytes of human text.
I get your optimism but it's uninformed.
> To be fair, I’ve talked to a lot of people who cannot consistently perform at the mistral-12b level.
I can find you an old-school bot that performs better than uneducated members of marginalized and super poor communities, what is your example even supposed to prove?
> it’s hard to argue that HJ isn’t an intelligent entity.
What's HJ? If it's not a human, then it's extremely easy to argue that it's not an intelligent entity. We don't have intelligent machine entities; we have stochastic parrots, and it's weird to pretend otherwise when the algorithms are well known and it's plainly visible there's no self-optimization in there. There's no actual learning, only adjusting weights (and this is not what our actual neurons do, btw). There's no motivation or self-drive to continue learning. There's barely anything there that has been "taught" to combine segments of human speech, and somehow that's a huge achievement. Sure.
> It’s not that we aren’t on the cusp of general intelligence, it’s that we have a distorted idea of how useful that should be.
Nah, we are on no cusp of AGI at all. We're not even at 1%. I don't know about you, but I have a very clear idea of what AGI would look like, and LLMs are nowhere near it. Not even in the same ballpark.
It helps that I am not in the field and don't feel the need to pat myself on the back for having achieved the next AI plateau, one the field will not soon recover from.
Bookmark this comment and tell me I am wrong in 10 years, I dare you.
This is honestly one of the most GPT-2 things I’ve ever read.
But it's a building block. And when used well, it may be possible to get to zero hallucinations and good accuracy in question answering for limited domains, like the call center.
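The "limited domain" point can be made concrete: one common way to push hallucinations toward zero is to only ever answer from a fixed, curated knowledge base and refuse everything else. A minimal sketch follows; the FAQ entries, threshold, and fuzzy matcher are illustrative assumptions (a real call-center system would use retrieval plus an LLM for phrasing), not any particular product's design:

```python
# Sketch: answer only from a fixed FAQ; refuse anything outside the domain.
# FAQ text and the 0.6 threshold are made-up illustrative values.
from difflib import SequenceMatcher

FAQ = {
    "how do i reset my password": "Use the 'Forgot password' link on the login page.",
    "what are your support hours": "Support is available 9am-5pm, Monday to Friday.",
    "how do i cancel my subscription": "Go to Account > Billing and click 'Cancel plan'.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    """Return the canned answer for the closest FAQ entry, or refuse."""
    q = question.lower().strip("?! .")
    best_key, best_score = None, 0.0
    for key in FAQ:
        score = SequenceMatcher(None, q, key).ratio()
        if score > best_score:
            best_key, best_score = key, score
    if best_key is not None and best_score >= threshold:
        return FAQ[best_key]
    # Out-of-domain: refuse instead of guessing (the anti-hallucination step).
    return "Sorry, I can't help with that; transferring you to a human agent."

print(answer("How do I reset my password?"))
print(answer("What's the meaning of life?"))
```

The design choice doing the work is the refusal branch: the system can only emit vetted text or an explicit hand-off, so there is nothing for it to hallucinate within the domain.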
You shouldn’t use science fiction as your reference point. It’s like saying “where is my flying car?” (Helicopters exist)
And btw, in the Terminator novelizations it was clearly stated that Skynet was a very good optimization machine but lacked creativity. So it's actually a good benchmark: can we create an intelligent machine that needs no supervision but still has limitations (i.e. it cannot dramatically reformulate its strategy when it cannot win, which is exactly what happened in the books)?