Thank you for following up. I'll respond point by point, somewhat pedantically; hopefully that won't make you bow out.
> Back in the days of lisp machines and expert systems, we started playing with early neural networks in an attempt to advance towards 'general intelligence' rather than the highly constrained talents of expert systems.
Interesting -- you must be the first person I've "met" who was actually there back then. Still, if you'll allow me to point it out for the second time, your "we" really throws me off, because it sounds like "we, the council of elders" and "we, who appointed ourselves to determine what an actual true AI is". Positions of implied or directly claimed authority murder my motivation to take the people assuming them seriously. Hopefully that's a useful piece of feedback for you.
I would think that a random guy like myself -- who watched Terminator, and that was part of his inspiration to become a programmer -- has just as much "authority" (if we can even call it that; I can't find a better word at the moment) to claim what a general AI should be. Since we don't have it, why not dream up the ideal AI for us and then pursue that? It's what the people who wanted humanity on the Moon did, after all.
I feel too many people try to define general AI through the lens of what we have right now -- or will have very soon -- and that seems very short-sighted and narrow-minded to me, like Bronze Age people trying to predict future technology. To them it would likely be better carts that shake less while you sit in them. And faster cart-pulling animals.
That's how current AI practitioners sound to me when they try to enforce their view of what we should expect.
> Meanwhile, -nearly every task- that we thought would represent "true intelligence" has fallen not to some magic AI algorithm, but rather to stochastic models like transformers, pattern matching, or straightforward computation. With no reason to expect otherwise, I expect this trend to continue unabated.
Sure, but this gets dangerously close to the disingenuous argument that "people who want AI keep moving the goalposts every time we make progress!" -- a stance I couldn't disagree with more if I tried. In fact I hate this trope and fight it at every opportunity.
Why? Because to me it looks like AI practitioners weaseling out of responsibility. It's actually not that difficult to understand what ordinary people want. Take a look at the "I, Robot" movie -- robot butlers that can do many tasks around the house or even assist you when you're out.
What does that take? Let the practitioners figure it out. I, like you, believe LLMs are definitely not that -- but you're also right that they're likely a key ingredient. Being able to digest and process text quickly and semi-informedly is indeed crucial.
The part I hate is the constant chest-pounding: "We practically have general AI now; you plebs don't know our specialized terms and you just don't get it. Stop it with your claims that we don't have it! Never mind that we don't have robot butlers -- that's always in the future, I'm telling you!"
And yes, that happens even here in this thread -- not in such a direct form, of course, but it still happens.
> My original point was more that we expect machine general intelligence to be spectacularly useful. It may be, someday, but it is a kind of fallacy to think that "its not that useful therefore it must not really be intelligence".
Here we disagree. It's true that people want something useful and don't care how it's achieved; as a fairly disgruntled-by-tech guy, I want the same. Put rat brains in my half-intelligent but problem-solving butler for all I care; if it works well, people will buy it en masse and ask zero questions.
...But I'd still posit a strong correlation between a machine being actually intelligent -- able to solve very different problems with the same "brain" -- and being useful. For the simple reason that most of our problems that don't require much intelligence have already been mostly solved.
So it naturally follows that we need intelligent tools for the problems we have not yet solved. Would you not agree with that conclusion?
> Small animals also have limited utility, but with given names, language, tool use, and problem solving skills I think arguing that they do not exhibit "intelligence" would be a tough sell for me.
I agree. Some animals can actually adapt to conditions their brains haven't had to tackle in generations. But take koalas, for example... I could easily call these animals not possessing general intelligence at all -- just pretty complex bots reacting to stimuli and nothing else (though there's also the possibility that, since they ingest such low-nutrition food, their brains constantly stay in a super low-energy mode where they barely work as problem-solving machines -- a topic for another time).
> By my observation, we have made giant leaps in the past 15 years, and we now have perhaps the vast majority of the components required to make artificial general intelligence...but it won't necessarily be all that useful at first, except maybe as a "pet robot" or something like that. Even if we scale it, it might not magically get smarter, just faster. a million hyper-speed squirrels still has a very limited level of utility.
Agreed as well; I'm just not sure the path to a better general AI lies in scaling up what we [might] have now. IMO, the plateau that LLMs hit quite quickly partially supports my hypothesis.
As you're alluding to, the path to general AI is to keep adding more and more components to the same amalgam and trying to connect them in creative ways. Eventually the spark of artificial life will ignite.
---
To summarize: my problem with the current breed of AI practitioners is that they argue from a position of authority that, to me, is imaginary. They work in one of the least settled areas of science, yet they have the audacity to claim superiority over everyone else -- when to me it's obvious that a random truck driver might have more interesting ideas for their field than they do (an exaggerated example, but my point is that they lack perspective and become too entrenched in their narrow views -- like all scientists, I guess).
Yes, LLMs are likely an integral part of a future artificial, working brain that can solve general tasks. And no, we won't get much further with them alone. Throwing another trillion parameters at them will achieve nothing but even more elaborate hallucinations. To me it has become blindingly obvious that without some symbolic logic in the mix, LLMs will forever have zero concept of what they're talking about, so they'll never truly improve -- especially since they also depend on their sources being truthful. That's not problem solving; that's regurgitating words with some extra steps.
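To make the "symbolic logic" point concrete, here's a minimal sketch in Python of the kind of pairing I mean -- entirely hypothetical, with `llm_propose` as a stub standing in for whatever model API you like: the LLM only *proposes* an answer, and a deterministic symbolic layer checks it before anything is accepted.

```python
import ast
import operator

# Hypothetical stand-in for a real model call -- any LLM API could go here.
def llm_propose(question: str) -> str:
    return "42"  # a real system would query a model; this stub just guesses

# Tiny symbolic evaluator: walks a parsed arithmetic expression deterministically.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

def sym_eval(expr: str) -> float:
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return float(node.value)
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval").body)

def answer(expr: str) -> float:
    guess = llm_propose(f"What is {expr}?")  # fluent but unverified
    truth = sym_eval(expr)                   # grounded, symbolic result
    # The symbolic side gets the final word; the LLM only proposes.
    return float(guess) if float(guess) == truth else truth

print(answer("6 * 7"))  # 42.0
```

Obviously a toy -- real neuro-symbolic systems are far hairier -- but it illustrates the division of labor I'm arguing for: the stochastic part generates, the symbolic part grounds.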
Time to move on to other stuff -- maybe the transformers? Speaking of which, do you have any material on them that you would recommend for a layman? Just a TL;DR of what they do and roughly how? Obviously I can Google it in literal seconds, but sadly that doesn't mean much these days -- so maybe you have a source that's more interesting to read.