It doesn’t work programmatically; that’s why it fails at logic. But it can reason inductively very well. Do you have an example besides logic/math where it doesn’t understand simple concepts?
All the time. It often fails to understand simple concepts. It doesn't really seem to understand anything.
For example, try to get it to write some code for a program in a moderately obscure programming language. It's terrible: it will confidently produce stuff, but make errors all over the place.
It's unable to understand that it doesn't know the language, and it doesn't know how to ask the right questions to improve. It doesn't have a good model of what it's trying to do, or what you're trying to do. If you point out problems it'll happily try again and repeat the same errors over and over again.
What it does is intuit an answer based on the data it's already seen. It's amazingly good at identifying, matching, and combining abstractions that it's already been trained on. This is often good enough for simple tasks, because it has been trained on so much of the world's output that it can frequently map a request to learned concepts, but it's basically a glorified Markov model when it comes to genuinely new or obscure stuff.
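To make the "glorified Markov model" comparison concrete, here is a toy bigram text model: it can only ever emit word transitions it has already seen in its training data, which is the point being made about genuinely new material. The corpus and seed word below are made up purely for illustration.

```python
import random
from collections import defaultdict

# Tiny made-up training corpus.
corpus = "the cat sat on the mat and the cat ran".split()

# Record every observed word-to-next-word transition.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(seed, n, rng=random.Random(0)):
    """Walk the transition table: each step recombines seen data, nothing more."""
    out = [seed]
    for _ in range(n):
        options = transitions.get(out[-1])
        if not options:
            break  # no transition ever observed from this word
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the", 5))
```

Every adjacent word pair in the output occurs somewhere in the corpus; the model cannot produce a transition it was never trained on. (LLMs are vastly more sophisticated than this, of course; the analogy is about the failure mode, not the mechanism.)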
It's a big step forward, but I think the current approach has a ceiling.
Is that really any different from asking me to attempt to program in a moderately obscure programming language without a runtime to test my code on? I wouldn't be able to figure out what I don't know without a feedback loop providing that data.
>If you point out problems it'll happily try again and repeat the same errors over and over again.
And quite often if you incorporate the correct documentation, it will stop repeating the errors and give a correct answer.
It's not a continuous-learning model either. It has a limited context window, beyond which it begins forgetting things. So yeah, it has limits far below most humans, but far beyond any we've seen in the past.
I don’t think its ability to program in an obscure language is really a great test. That’s a matter of syntax more than semantics, no?
Novel conceptual blends are where it excels. Yes, it needs to understand the concepts involved to blend them, but humans need that too.
It doesn't seem to have any "meta" understanding. It's subconscious thought only.
If I asked a human to program in a language they didn't understand, they'd say they couldn't, or they'd ask for further instructions, or some reference to the documentation, or they'd suggest asking someone else to do it, or they'd eventually figure out how to write in the language by experimenting on small programs and gradually writing more complex ones.
GPT-4 and friends "just" generate an output that seems like it could plausibly answer the request. If it gets it wrong then it just has another go using the same generative technique as before, with whatever extra direction the human decides to give it. It doesn't think about the problem.
("just" doing a lot of work in the above sentence: what it does is seriously impressive! But it still seems to be well behind humans in capability.)
To the extent that humans have encoded the concepts into words, and that text is in the training set, to that degree ChatGPT can work with the words in a way that is at least somewhat true to the concepts encoded in them. But it doesn't actually understand any of the concepts - just words and their relationships.
I don't think this is the case at all. Language is how we encode and communicate ideas/concepts/practicalities; with sufficient data, the links are extractable just from the text.
But I suspect your notion of understanding is not measurable, is it? For you, ChatGPT lacks something essential such that it is incapable of understanding, no matter the test. Or do you have a way to measure this without appeal to consciousness or essentialism?
Does understanding require consciousness? Maybe yes, for the kind of understanding I'm thinking of, but I'm not certain of that.
How do you measure understanding? You step a bit outside the training set, and see if whoever (or whatever) being tested can apply what it has learned in that somewhat novel situation. That's hard when ChatGPT has been trained on the entire internet. But to the degree we can test it, ChatGPT often falls down horribly. (It even falls down on things that should be within its training set.) So we conclude that it doesn't actually understand.
For example, it cannot reliably identify musical chords. Despite (I presume) ample training material explaining exactly how this works, it cannot represent the task as an abstract, rigorous rule the way humans do. If I ask what C E G is, it correctly tells me C major, as that example presumably appears many times in the training set. Yet if I ask about F Ab Db, it does not tell me Db major, because it never understood the rule at all.
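The "abstract rigorous rule" in question is simple enough to sketch in a few lines: a major triad is a root plus notes 4 and 7 semitones above it, in any inversion. This minimal sketch assumes enharmonic equivalence (Db and C# are treated as the same pitch class) and handles only major triads, which is all the example needs.

```python
# Map note names to pitch classes 0-11 (enharmonics collapse to one class).
PITCH = {"C": 0, "C#": 1, "Db": 1, "D": 2, "D#": 3, "Eb": 3, "E": 4,
         "F": 5, "F#": 6, "Gb": 6, "G": 7, "G#": 8, "Ab": 8, "A": 9,
         "A#": 10, "Bb": 10, "B": 11}
NAMES = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

def major_triad(notes):
    """Return the root name if the notes form a major triad in any inversion."""
    pcs = {PITCH[n] for n in notes}
    for root in pcs:
        # Major triad rule: root, major third (+4), perfect fifth (+7), mod 12.
        if pcs == {root, (root + 4) % 12, (root + 7) % 12}:
            return NAMES[root]
    return None

print(major_triad(["C", "E", "G"]))    # C
print(major_triad(["F", "Ab", "Db"]))  # Db (first inversion of Db major)
```

A human who has internalized the interval rule applies it uniformly to any root; the complaint above is that the model instead seems to recall frequent examples and fails on less common ones.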
In a sense though, this is just a logic problem.
I hate to break it to you, but humans aren't thinking logically, or at least not exclusively logically. In fact I would say that humans are not using logic most of the time; we go by intuition for most of our lives (intuition being shorthand for experience, pattern matching, and extrapolation). There is a reason why we teach formal logic in certain schools...
It even told me about the role of each chord in a progression, and how, even though they share the same notes, they resolve differently.
Humans clearly don't think logically anyhow. That's why we need things like an abacus to help us concretely store things; in our heads, everything is relative in importance to other things in the moment.
So it gave you a wrong answer, and when you spelled out the correct answer it said "OK" x) Was that it? Or am I missing something?
The reason that ChatGPT can write quantum computing programs in any domain (despite the lack of existing examples!) is that it can deal with the concepts of quantum computing and the concepts of a domain (e.g., predicting housing prices) and align them.
Very little of human reasoning is based on logic and math.
Can you be more specific? I literally don't know what you mean. What can you say about quantum mechanics that is not mathematical or logical in nature? Barring metaphysical issues of interpretation, which I assume is not what you mean.