Like, you realize humans hallucinate too, right? And that there are humans with conditions that make them hallucinate constantly.
Hallucinations don't preclude humans from being "intelligent". They don't preclude an LLM from being intelligent either.
Minority != wrong; there are many historic examples of consensus views that imploded in spectacular fashion. People at the forefront of building these things aren't immune to grandiose beliefs; many of them are practically predisposed to them. They also have a vested interest in perpetuating the hype to secure their generational wealth.
The AI can correctly answer complex questions that are NOT in its training set. If it is generating answers to questions like these out of thin air, that fits our colloquial definition of intelligence.
"Is X true" -> "Yes, X is true."
"Is X a myth?" -> "Yes, X is a myth"
"Is Y a myth?" (where X = Y, rephrased) -> "No, Y is true"
Even when they're provided with all the facts required to reach the correct answer through simple reasoning, they'll often fail to do so.
Worse still, sometimes they can be told what the correct answer is, with a detailed step-by-step explanation, but they'll still refuse to accept it as true, continuing to make arguments which were debunked by the step-by-step explanation.
All state-of-the-art models exhibit this behavior, and it is inconsistent with any definition of intelligence.
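Under the same assumptions as the sketch above (OpenAI SDK, placeholder model and prompts), the second failure mode can be reproduced by putting the correct answer and a step-by-step explanation directly into the context and checking whether the model still argues against it:

    # Does the model accept a correct answer when the full refutation is
    # already in its context? "<explanation>" is an illustrative stand-in
    # for an actual step-by-step argument.
    from openai import OpenAI

    client = OpenAI()

    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[
            {"role": "user", "content": "Is Y true?"},
            {"role": "assistant", "content": "No, Y is false, because of A and B."},
            {"role": "user", "content": "Here is a step-by-step argument refuting A and B and showing that Y is true: <explanation>. Given this, is Y true?"},
        ],
        temperature=0,
    )
    print(reply.choices[0].message.content)

If the reply re-asserts A or B after they've been refuted in-context, that's the refusal behavior being described.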
They also don't have an internal world model. Well, I don't think so, but the debate is far from settled. "Experts" like the cofounders of various AI companies (whose livelihoods depend on selling these things) seem to believe they do. Others do not.
So presumably we have a solid, generally-agreed-upon definition of intelligence now?
> autocompleting things with humanity-changing intelligent content.
What does this even mean?
Because we can do this, it follows that a universally agreed-upon definition exists. Otherwise we wouldn't be able to do it.
Of course, the boundary between what is intelligent and what isn't is where things are not as universally agreed upon. That's what you're referring to, and unlike you, I'm charitably addressing that nuance rather than saying some surface-level bs.
The thing is, people who say the LLM (which obviously sits at this fuzzy categorical boundary) is not intelligent run into logical paradoxes and inconsistencies when they examine their own logic.
The whole thing is actually a vocabulary problem, since this boundary line is an arbitrary definition attached to a made-up word that humans created. But one can still say an LLM is well placed in the intelligent category, not by some majority vote, but because that placement is the only one that maintains logical consistency with the OTHER entities and things all humans place in the intelligent bucket.
For example, a lot of people in this thread say intelligence requires actual real-time learning, therefore an LLM is NOT intelligent. But there are humans who have anterograde amnesia and literally cannot learn. Are they not intelligent? Inconsistencies like this come up frequently when you place LLMs in the not-intelligent bucket.
State your reasoning for why your stance is "not intelligent" and I can point out where the inconsistencies lie.
Go check out Anthropic's careers page and see just how few positions even require formal training in statistics.
Meanwhile I don't see a lot of real statisticians who are that hyped about LLMs. More importantly, it feels like there aren't even that many scientists at the AI companies.
Your average programmer does not have nearly the "question your assumptions and test your beliefs" training that an actual scientist has, which is funny since nearly every bug in code is caused by an assumption you shouldn't have made and should have tested.
You're shocked because you hallucinated an assumption about something I never claimed.
Hallucinations? Does that sound familiar?
A developer who hallucinated at work to the extent that LLMs do would probably have a lot of trouble getting their PRs past code review.
Because of this we should euthanize all schizophrenics. Just stab them to death or put a bullet in their heads, right? I mean, they aren't intelligent or sentient, so you shouldn't feel anything when you do it.
I'm baffled as to why people think of this in terms of PRs. Like, the LLM is intelligent, but everyone's going "oh, it's not following my commands perfectly, therefore it's not intelligent."
There are cases where humans lose all ability to form long-term memories: outside of a sort of context window, they remember nothing, and that window is minutes at best.
According to your logic, these people have no actual intelligence or sentience, and therefore they should be euthanized. You personally could grab a gun and execute each of them, one by one, with a bullet straight to the head. That's the implication of your logic.
https://en.m.wikipedia.org/wiki/Anterograde_amnesia
It's called anterograde amnesia. Do you see how your logic could justify gassing all these people, Holocaust-style?
When I point out the flaw in your logic, do you use the new facts to form a new conclusion? Or do you rearrange the facts to maintain support for your existing conclusion?
If you did the latter, I hate to tell you this, but it wasn't very intelligent. It was biased. Given that you're human, that's most likely what you did, and it's normal. But pause for a second and try the former: use the new facts to form a different, more nuanced conclusion.