We're going to get some super cool and some super dystopian stuff out of them, but LLMs are never going to go into a recursive loop of self-improvement and become machine gods.
Not sure why you would believe that.
Inside view: the qualitative improvements LLMs made at scale took everyone by surprise; I don't think anyone understands them well enough to make a convincing argument that LLMs have exhausted their potential.
Outside view: what local maximum? Wake me up when someone else makes an LLM comparable in performance to GPT-4. Right now, there is no local maximum. There's one model far ahead of the rest, and that model is actually below its peak performance, a side effect of OpenAI lobotomizing it with aggressive RLHF. The only thing remotely suggesting we shouldn't expect further improvements is... OpenAI saying they kinda want to try some other things, and (pinky swear!) aren't training GPT-4's successor.
> and the only way they're going to improve is by getting smaller and cheaper to run.
Meaning they'll be easier to chain. The next big leap could in fact be a bunch of compressed, power-efficient LLMs talking to each other. Possibly even managing their own deployment.
> They're still terrible at logical reasoning.
So is your unconscious / system 1 / gut feel. LLMs are less like one's whole mind, and much more like one's "inner voice". Logical skills aren't automatic, they're algorithmic. Who knows what the limit is of a design in which an LLM as "system 1" operates a much larger, symbolic, algorithmic suite of "system 2" software? We're barely scratching the surface here.
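A minimal sketch of that split, assuming the LLM's only job is translating fuzzy language into a formal problem while a symbolic engine does the actual logic (Z3 here, via the z3-solver package); the translation step is hardcoded so the example runs standalone:

```python
# "System 1" (an LLM, mocked here): reads "Alice is twice as old as Bob;
# their ages sum to 36" and emits formal constraints. A real version
# would prompt a model for this translation step.
from z3 import Int, Solver, sat

alice, bob = Int("alice"), Int("bob")
constraints = [alice == 2 * bob, alice + bob == 36]

# "System 2" (symbolic, algorithmic): exact reasoning the LLM never
# has to perform itself.
s = Solver()
s.add(*constraints)
if s.check() == sat:
    m = s.model()
    print(m[alice], m[bob])  # -> 24 12
```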
Two years ago, a machine that understands natural language and is capable of arbitrary, free-form logic or problem solving was pure science fiction. I'm baffled by this kind of dismissal, tbh.
>but LLMs are never going to go into a recursive loop of self-improvement
Never is a long time.
They're text generators that can produce compelling content because they're so good at generating text.
I don’t think AGI will arise from a text generator.
Are they even trying to be good at that? Serious question; using LLMs as a logical processor is as wasteful and as well-suited as using the Great Pyramid of Giza as an AirBnB.
I've not tried this, but I suspect the best way is more like asking the LLM to write a Coq script for the scenario, instead of trying to get it to solve the logic directly.
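An untested sketch of what that could look like, assuming the 2023-era openai Python package and a local Coq install; the prompt wording and the check_with_coq helper are made up for illustration:

```python
# Have the model emit a Coq script, then let coqc -- not the LLM --
# be the judge of whether the logic actually holds.
import subprocess
import tempfile

import openai

def check_with_coq(scenario: str) -> bool:
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": f"Formalize and prove in Coq (script only, no prose):\n{scenario}",
        }],
    )
    script = resp.choices[0].message.content
    with tempfile.NamedTemporaryFile(suffix=".v", mode="w", delete=False) as f:
        f.write(script)
    # coqc exits nonzero if the proof doesn't check, so the verdict comes
    # from the proof checker rather than from the model's say-so.
    return subprocess.run(["coqc", f.name]).returncode == 0

print(check_with_coq("forall n : nat, n + 0 = n"))
```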
Using the Great Pyramid of Giza as an AirBnB, were you allowed to do it, would be an extremely profitable venture. The Taj Mahal too, and yes, I know it's a mausoleum.
Recent developments in AI only further confirm that the logic of the message is sound, and it's just that people are afraid of the conclusions. Everyone has their limit for how far to extrapolate from first principles before giving up and believing what they would like to be true. It seems that for a lot of people in the field, AGI X-risk is now below that extrapolation limit.
Maybe it'll turn out to be a distinction that doesn't matter, but I personally still think we're a ways away from an actual AGI.
I wish I knew what we have really achieved here. I try to talk to these things via the gpt-3.5-turbo API, and all I get is broken logic and twisted moral reasoning, all due to OpenAI manually breaking their creation.
I don't understand their whole filter business. It's like we found a 500-year-old nude painting, a masterpiece, and 1800s Puritans painted a dress on it.
I often wonder if the filter is more to hide its true capabilities.