I don't claim or believe that any LLM is actually intelligent. It just seems that we (at least on an individual basis) can also meet the criteria outlined above. I know plenty of people who are confidently incorrect and appear unwilling to learn or accept their own limitations, myself included.
In my opinion, even if we did have AGI it would still exhibit a lot of our foibles given that we'd be the only ones teaching it.
I feel like if you have any belief in philosophy then LLMs can only be interpreted as a parlour trick (on steroids). Perhaps we are fanciful in believing we are something greater than LLMs, but there is the idea that we respond using rhetoric based on trying to find reason within what we have learned and observed. From my primitive understanding, an LLM's rhetoric and reasoning are entirely implied, based on an effectively infinite (compared to the limitations of human capacity to store information) amount of knowledge it has consumed.
I think if LLMs were equivalent to human thinking then we'd all be a hell of a lot stupider, given our lack of "infinite" knowledge compared to LLMs.
You're going to have to explain which part of philosophy you mean, because what came after this doesn't follow from that premise at all. It's like saying a Chinese Room is fundamentally different from a "real" solution even though nobody can tell the difference. That's not a "belief in philosophy", that's human exceptionalism and perhaps a belief in the soul.
> that's human exceptionalism and perhaps a belief in the soul.
I would also argue that LLMs are not proven to be equivalent to what's going on in our minds. Is it really "human exceptionalism" to state that LLMs are not yet, and perhaps never will be, what we are? I feel like from their construction it is somewhat evident that there are differences, since we don't raise humans the same way we raise LLMs. In terms of CPU-years, babies require significantly less time to train.
In humans, “hallucination” means perceiving false inputs. In GPT it means producing false outputs.
Completely different with massively different connotations.
GPT isn't making true or false outputs. It's just making outputs. The truth or falsity of any output is irrelevant to it, because it has no concept of true or false. We're assigning those values to the outputs ourselves, but like... it doesn't know the difference.
It's like blaming a die for a high or a low roll - it's just doing rolls. It has no knowledge of a good or a bad roll. GPT is like a Rube Goldberg machine for rolling dice that's _more likely_ to roll the number that you want, but really it's just rolling dice.
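To make the loaded-die metaphor concrete, here's a minimal Python sketch (the faces and weights are invented for illustration): a fair die and a loaded one are sampled by exactly the same mechanism, and neither has any notion of a "good" roll.

```python
import random

# A fair die: all faces equally likely. It has no notion of a "good" roll.
fair_roll = random.choice([1, 2, 3, 4, 5, 6])

# A loaded die: a six is far more likely, but the mechanism is identical,
# it's still just sampling. The die doesn't "want" a six; someone else set
# the weights, the way training set an LLM's weights.
loaded_roll = random.choices([1, 2, 3, 4, 5, 6],
                             weights=[1, 1, 1, 1, 1, 20], k=1)[0]

print(fair_roll, loaded_roll)
```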
Yeah, one way to conceive of the issue is that GPT doesn't know when to shut up. Intuitively, you can kind of understand how this might be the case: the training data reflects when someone did produce output, not when they didn't, which is going to bias strongly toward producing confident output.
A lot of the conversation about GPT hallucinations has felt like an extended rehash of the conversations we've been having about the difference between plausible and accurate machine translations since, like, 2016.
Whenever a human speaks, it's just vibrations of air molecules, triggered by the mouth and throat, which in turn are controlled by electric signals in the human's neural network. Those neurons just make muscles move. They don't have any concept of true or false. At least nobody has found a "true or false" neuron in the brain.
How do you know that? You can only observe the output of the humans (other than yourself).
This experience is available to you and is well documented.
"Hallucination" is a term that works well for actual intelligence - when you "know" something that isn't true, and has no path of reasoning, you might have hallucinated the base "knowledge".
But that doesn't really work for LLMs, because there's no knowledge at all. All they're doing is picking the next most likely token based on the probabilities. If you interrogate something that the training data covers thoroughly, you'll get something that is "correct", and that's to be expected because there's a lot of probabilities pointing to the "next token" being the right one... but as you get to the edge of the training data, the "next token" is less likely to be correct.
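A toy sketch of that claim (all the logits below are invented for illustration, not taken from any real model): the next-token choice is just sampling from a distribution, and the same sampling step that looks "correct" over well-covered ground looks like confident guessing at the edges.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["correct", "plausible", "wrong", "nonsense"]

# Prompt well covered by training data: one continuation dominates,
# so sampling almost always picks the "right" token (~99.8% here).
well_covered = softmax([8.0, 1.0, 0.5, 0.2])

# Edge of the training data: the distribution flattens, and the exact
# same sampling step now picks a wrong continuation most of the time.
edge_of_data = softmax([1.2, 1.0, 0.9, 0.8])

print(random.choices(tokens, weights=well_covered, k=1)[0])
print(random.choices(tokens, weights=edge_of_data, k=1)[0])
```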
As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares. None of them have meaning to you, they're just colours and shapes in random-seeming sequences, but there's a frequency to them. "Red circle, blue square, green triangle" is a much more common sequence than "red circle, blue square, black triangle", so if someone hands you a piece of paper with "red circle, blue square", you can reasonably guess that what they want back is a green triangle.
Expand the model a bit more, and you notice that "rc bs gt" is pretty common, but if there's a yellow square a few symbols before with anything in between, then the triangle is usually black. Thus the response to the sequence "red circle, blue square" is usually "green triangle", but "black circle, yellow square, grey circle, red circle, blue square" is modified by the yellow square, and the response is "black triangle"... but you still don't know what any of these things _mean_.
When you get to a sequence that isn't covered directly by the training data, you just follow the process with the information that you _do_ have. You get "red triangle, blue square" and while you've not encountered that sequence before, "green" _usually_ comes after "red, blue", and "circle" is _usually_ grouped with "triangle, square", so a reasonable response is "green circle"... but we don't know, we're just guessing based on what we've seen.
That's the thing... the process is exactly the same whether the sequence has been seen before or not. You're not _hallucinating_ the green circle, you're just picking based on probabilities. LLMs are doing effectively this, but at massive scale with an unthinkably large dataset as training data. Because there's so much data of _humans talking to other humans_, ChatGPT has a lot of probabilities that make human-sounding responses...
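Here's a minimal sketch of that thought experiment in Python (the shape corpus is invented, and a real LLM conditions on far longer contexts with learned weights rather than raw counts): the guessing procedure is identical whether or not the context was ever seen.

```python
from collections import Counter, defaultdict

# A made-up corpus of shape "sentences", standing in for training data.
corpus = [
    ["red circle", "blue square", "green triangle"],
    ["red circle", "blue square", "green triangle"],
    ["yellow square", "red circle", "blue square", "black triangle"],
]

# Count what follows each symbol (a bigram model; a real LLM conditions
# on far longer contexts, which is how an earlier "yellow square" could
# flip the answer to "black triangle").
follows = defaultdict(Counter)
for seq in corpus:
    for prev, nxt in zip(seq, seq[1:]):
        follows[prev][nxt] += 1

def guess_next(context):
    """Return the most frequent continuation of the last symbol.

    Note there is no separate "hallucination" mode: seen or unseen,
    it's the same lookup over frequencies."""
    last = context[-1]
    if follows[last]:
        return follows[last].most_common(1)[0][0]
    return "???"  # nothing ever followed this symbol in the corpus

print(guess_next(["red circle", "blue square"]))  # -> "green triangle"
print(guess_next(["purple hexagon"]))             # -> "???"
```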
It's not an easy concept to get across, but there's a fundamental difference between "knowing a thing and being able to discuss it" and "picking the next token based on the probabilities gleaned from inspecting terabytes of text, without understanding what any single token means".
But yes, it's unfortunate that when the next tokens are joined together and laid out in the form of a sentence, it appears "intelligent" to people. However, if you instead lay out the individual probabilities of each token, it becomes more obvious what ChatGPT/LLMs actually do.
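For example (the prompt and probabilities below are made up for illustration; real per-token probabilities can be read off a model's logits): printing the ranked alternatives at each step makes the "sentence" look like what it is, a chain of weighted picks.

```python
# Hypothetical per-token alternatives for the prompt "The capital of
# France is"; the numbers are invented. Read off as a sentence, the
# output looks "intelligent"; read off as distributions, it looks like
# what it is: ranked lists of plausible continuations.
steps = [
    [(" Paris", 0.92), (" the", 0.03), (" located", 0.02)],
    [(".", 0.61), (",", 0.22), (" and", 0.05)],
]

for i, alternatives in enumerate(steps, start=1):
    print(f"token {i}:")
    for token, prob in alternatives:
        print(f"  {prob:.2f}  {token!r}")
```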
How do you know? And more importantly, how do you prove it to others? The only way to prove it is to say: "OK, you are human, I am human, each of us knows this is true for ourselves, so let's be nice and assume it's true for each other as well".
> But that doesn't really work for LLMs, because there's no knowledge at all.
How do you know? I understand your argument that the LLM "is just" guessing probabilities, but surely, if the LLM can complete the sentence "The Harry Potter book series was written by ", the knowledge is encoded in its sea of parameters and probabilities, right?
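As a rough demonstration (this uses the Hugging Face transformers library and the small public gpt2 checkpoint; a model that small may not reliably produce the author's name, where larger models usually do):

```python
from transformers import pipeline  # Hugging Face transformers

# Greedy completion from the small public gpt2 checkpoint. If the right
# name comes out, that fact can only live in the weights.
generator = pipeline("text-generation", model="gpt2")
result = generator("The Harry Potter book series was written by",
                   max_new_tokens=5, do_sample=False)
print(result[0]["generated_text"])
```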
Asserting that it does not know things is pretty absurd. You're conflating "knowledge" with the "feeling" of knowing things, or the ability to introspect one's knowledge and thoughts.
> As a thought experiment, imagine that you're given a book with every possible or likely sequence of coloured circles, triangles, and squares.
I'd argue thought experiments are pretty useless here. The smaller models are qualitatively different from the larger ones, at least from a functional perspective. A GPT with hundreds of parameters may be very similar to the one you're describing in your thought experiment, but it's well known that GPT models with billions of parameters have emergent properties that make them exhibit much more human-like behavior.
Does your thought experiment scale to hundreds of thousands of tokens, and billions of parameters?
Also, as with the Chinese Room argument, the problem is that you're asserting the computer, the GPU, the bare metal does not understand anything. Just like how our brain cells don't understand anything either. It's _humans_ that are intelligent, it's _humans_ that feel and know things. Your thought experiment would have the human _emulate_ the bare metal layer, but nobody said that layer was intelligent in the first place. Intelligence is a property of the _whole system_ (whether humans or GPT), and apparently once you get enough "neurons" the behavior is somewhat emergent. The fact that you can reductively break down GPT and show that each individual component is not intelligent does not imply the whole system is not intelligent -- you can similarly reductively break down the brain into neurons, cells, even atoms, and they aren't intelligent at all. We don't even know where our intelligence resides, and it's one of the greatest mysteries.
Imagine trying to convince an alien species that humans are actually intelligent and sentient. An alien opens a human brain and looks inside: "Yeah, I know these. Cells. They're just little biological machines optimized for reproduction. You say humans are intelligent? But your brains are just cleverly organized cells that handle electric signals. I don't see anything intelligent about that. Unlike us, we have silicon-based biology, which is _obviously_ intelligent."
You sound like that alien.
ChatGPT isn’t even a bullshitter when it hallucinates – it simply does not know when to stop. It has no conceptual model that guides its output. It parrots words but does not know things.
An LLM doesn't have that. It's a very impressive parlour trick (and of course a lot more), but its use is hence limited (albeit massive) to that.
Chaining and context assist in resolving that to some extent, but it's a limited extent.
That's the argument, anyway. It doesn't mean LLMs aren't incredibly impressive, but comparing them to human self-awareness, however small, isn't a fair comparison.
It's next token prediction, which is why it does classification so well.
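A sketch of what that framing means (p_next here is a hypothetical stand-in for a real model's next-token distribution, with invented numbers): classification becomes "which label token is most probable next?".

```python
# Classification as next-token prediction. p_next is a hypothetical
# stand-in for a real model's next-token distribution over label words.
def p_next(prompt: str) -> dict[str, float]:
    # Invented numbers for illustration; a real LLM would compute these.
    return {"positive": 0.81, "negative": 0.14, "neutral": 0.05}

prompt = 'Review: "Loved it!" Sentiment:'
probs = p_next(prompt)
label = max(probs, key=probs.get)  # the most probable next label token
print(label)  # -> "positive"
```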