That said, I have a small problem with the examples presented to argue that machines already understand us :)
The article says: "For example, when I tell Siri 'Call Carol' and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request."
Let me take a shot at explaining why Siri did not "understand" your request.
Siri was waiting for a command and executed the best command that matched: make a phone call.
It did not understand what you meant because it did not take the whole environment into consideration. What if Carol were just in the other room? A human might just shout "Hey Carol, Thomas is asking for you" instead of making a phone call.
If listening to a request and executing a command is understanding, then computers have been understanding us for a long time. Even without the latest advances in AI.
This is the crux of the matter. These voice recognition agents are trained with the goal of accurately modelling a function that converts recorded sound into a series of words, and then acting on those words to perform the most appropriate action. They are NOT trained to model the entire world, which is an incredibly complex task that no one has yet been able to formulate as a problem that computers can solve. Humans, on the other hand, have a machine that is extremely well-equipped to do just that: the brain. And that is exactly why humans are able to "understand" things, while we feel that machines are not, by our definition of "understand".
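The narrow task these agents actually solve can be sketched as a toy intent matcher: text in, closest matching command out, with no model of the wider world. The command names here are hypothetical placeholders, nothing like Siri's real pipeline.

```python
import difflib

# Toy command templates; real assistants use learned models, but the
# shape of the task is the same: utterance in, best-matching action out.
COMMANDS = {
    "call {name}": "dial",
    "text {name}": "send_sms",
    "play {song}": "play_music",
}

def match_intent(utterance: str) -> tuple[str, str]:
    """Pick the command whose verb best matches the utterance's first word."""
    verb, _, arg = utterance.lower().partition(" ")
    templates = {t.split()[0]: action for t, action in COMMANDS.items()}
    best = difflib.get_close_matches(verb, list(templates), n=1)
    if not best:
        return ("unknown", arg)
    return (templates[best[0]], arg)

print(match_intent("Call Carol"))  # ('dial', 'carol')
```

Nothing here knows what a "call" is, who Carol is, or whether she is in the next room; it only picks the least-bad template.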
In the far distant future, if and when we do figure out a way to model the entire world, come up with a suitable objective function, and solve it on a computer, there's no reason why that machine should be any less capable of understanding things than the average human.
We have a very specific set of evolved traits that define our understanding of the universe. A lot of that is social. So our "understanding" of the phrase "call Carol" includes a wide range of social cues about what that means, and your example is perfect: "call Carol" means that I want to talk to her, and that would be better done in person if possible, but that "if possible" has a more-or-less specific range of "if she's within earshot so I can yell for her", which is limited to the range of a human voice (not the maximum range of a scream, just a normal yelling range). Which is less if the door is closed, or there's music playing, or Kevin is trying to nap in the other room. And not at all if we're in a library, or a concert, or even a public space where yelling would draw attention. If "call Carol" has to include all of these to qualify for "understanding", then I think I know some people who fail at this test.
My go-to thought experiment on this is dolphins. Dolphins are intelligent, have language, etc. But their understanding of the world must be so different. Trying to explain to a dolphin what "tripping someone up" means is going to be tricky. They may understand the words, but they'll never understand the concept.
We swim in a sea of social cues and non-verbal communication. We can program an AI to imitate more and more of this, and be aware of more of it, but it's like teaching dolphins about long-distance running. It's never going to come naturally. And they're never going to evolve that understanding naturally (like we do as children) because it's not in their nature. We anthropomorphise our machines a lot, and we assume that they'll grow (like children) to grok all of our social cues eventually, because our only experience of similar situations is, well, children. But they're just machines, designed for a single purpose. They're never going to grok this. They're never going to be "like us" and really understand all the social ramifications of "call Carol". At some point I think we're going to have to accept this, and say that the machine understands the phrase "call Carol" enough. TFA draws the line at the machine calling Carol, and that seems reasonable.
The classic analogue is of course the Chinese room argument: https://en.m.wikipedia.org/wiki/Chinese_room
If you could make a machine pass the Turing test, it might be intelligent. But no one has, it's debatable whether it's even possible, and it's even more debatable whether, hype notwithstanding, the Turing test is even a good test of human-equivalent intelligence, because it ignores side channels that are fundamental to human communication: tone of voice, posture, and facial expression.
(Yes, people communicate over email/SMS. But no one communicates over email/SMS without an implied social context that hugely limits and simplifies the content of any conversation.)
It's not the "call Carol" problem that needs to be solved. It's the "understand the entire world context well enough to know how to call Carol without being told" problem - which includes being able to research information that isn't already available, and also includes edge cases like "We went to Carol's funeral last week", "Carol had her phone stolen yesterday", "Carol is flying to Australia and won't be receiving messages for another 12 hours", and "Carol prefers FaceTime to WhatsApp".
And so on.
Ultimately your toy machine has to show evidence that it understands the entire world and can learn about it like a human can - which includes being able to do original research that isn't a simple literal Google search, parse humour, understand emotional responses and common cultural references, and follow standard social protocols.
That's a much harder problem than having a vaguely plausible limited text-only conversation, whether it's in Chinese, English, or Swahili.
ISTM there's no more "understanding" involved in this than when I touch the Contacts icon on my screen, then "C", "A", "R", etc until Carol's entry is displayed, and then I touch the Phone icon to initiate a call.
The fact that the interface used was sound-waves that the device recognised as matching the keyword "call" and the contact-list entry "Carol", rather than my finger touching specific areas of the screen, may be a handy feature. Of course it's a triumph of signal processing, fuzzy recognition, etc. But there's no more "understanding" involved than in the touch-screen version of the action, or in typing a command and parameter into a terminal window.
I think this is a reasonable thing to say, in the limited way he has defined ‘understanding’. People forget what a titanic achievement that user interfaces that allow us to communicate our intentions to a computer and receive a relevant response actually are, whether it’s using a voice or clicking a button.
The problem with the hype is that we are nowhere close to building systems that understand anything.
All we've built are calculators on steroids so far.
For example, take the classical AI knowledge-base fragment, "a bird is an animal that flies". If I ask for an example of a bird and it says "eagle", it exhibits some understanding. We can then probe further and ask for a bird which is not an eagle. If it says "bat" or "balloon", it shows that it still doesn't quite understand birds.
In particular, if the description is nonsensical and thus impossible to understand, we cannot give any examples.
This idea was really inspired by a study where they asked people to distinguish nonsensical from profound sentences describing certain situations. The profound ones are those for which you can create a concrete instance of the situation.
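The probing procedure above can be sketched as a toy constraint-satisfaction check: a description is a set of constraints, and its "examples" are the knowledge-base entries that satisfy all of them. The knowledge base and property names here are invented purely for illustration.

```python
# A toy knowledge base: each entity is a set of properties.
KB = {
    "eagle":   {"bird", "animal", "flies"},
    "penguin": {"bird", "animal", "swims"},
    "bat":     {"animal", "mammal", "flies"},
    "balloon": {"object", "flies"},
}

def examples_of(description: set, excluding=()) -> list:
    """Entities whose properties satisfy every constraint in the description."""
    return [name for name, props in KB.items()
            if description <= props and name not in excluding]

# "a bird is an animal that flies" as a constraint set:
print(examples_of({"bird", "animal", "flies"}))             # ['eagle']
# "a bird that is not an eagle" -- this toy KB has no other flying bird:
print(examples_of({"bird", "animal", "flies"}, {"eagle"}))  # []
# A nonsensical description has an empty extension:
print(examples_of({"bird", "flies", "object"}))             # []
```

A system that answers "bat" or "balloon" to the second query is ignoring some of the constraints, which is exactly the failure of understanding described above.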
You've rigged this up to operationalize it for current digital machines.
"Understanding", "Intelligence", etc. is a feature of animals in their environment. We need to begin there; and that is what we are talking about.
We "understand" how to drive as a dog "understands" how to play fetch. Understanding is not ever going to be a trivial rule that some digital system may instantiate.
It will always require direct causal contact with an environment. In my view "understanding" is "competent play in a changing environment", i.e., the ability to modify the environment as it changes in accordance with your goals.
This rough definition is inspired by work in animals to understand the role of the neocortex, and animal learning, and the role of consciousness therein. Roughly: consciousness is "perceptual and cognitive intelligence grappling with environmental change".
I am agnostic regarding that, as I don't think there is any evidence that they do not attempt to build models that are consistent representations of reality.
I am assuming, based on my own experience, they also have this "internal lightbulb" going on when they think they have built the correct model. But whether they are actually cognizant of it (self-aware), I have no idea. (I guess what I am saying is that understanding and self-awareness are two different things.)
On my reading list is "The proper treatment of events", a book which "studies the semantics of tense and aspect" within a formal framework of constraint logic programming[1]. There is other similar work in this area, like "Good-enough parsing, Whenever possible interpretation: a constraint-based model of sentence comprehension"[2].
[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.... [2] https://hal.archives-ouvertes.fr/hal-01907632/file/CSLP-Blac...
Question: What is an example of a bird? Answer: An egret. Question: What is another example? Answer: Canaries.
Seems to do fine. I don't really have a stop condition though, so it goes on making up new questions on its own. Make of it what you will. Very few of the answers are correct, or even coherent enough to be judged: https://hastebin.com/agululiqif.txt
I do like this one though:
Question: Who is the inventor of the English ham? Answer: Poor old Francis Bacon.
Above, I am talking in the narrow sense, so the fact that the model itself is wrong shouldn't be an issue. But in the broad sense, we could say that understanding is the ability to convert between intensional and extensional (ostensive) representations (models) of the world. Finding an example from an intensional representation is just one task that this requires.
Edit: nvm, I think I found it: http://journal.sjdm.org/15/15923a/jdm15923a.pdf
But perhaps I wasn't clear: the study doesn't say this. Rather, my own experience with the BS sentences in that study led me to the observation that they have an empty set of examples if we take them as a constraint-satisfaction problem of sorts.
> The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth. - Niels Bohr
And, in fact, it is my rule of thumb test if something is a profound truth.
On the other hand, machines still perform actions that one could call 'stupid'. When AlphaGo was losing the fourth match against Lee Sedol, it would play 'stupid' moves. These were, for instance, trivial threats that any somewhat accomplished amateur go player would recognize in an instant and answer correctly.
Humans, and also animals, have a hierarchy in their understanding of things. This maps onto brain structure too: evolution has added layers to the brain while keeping the existing structure. In this layered structure, the lower parts are faster and more accurate but not as sophisticated. Stupidity arises from a lack of layeredness: when the goal of winning the game is thwarted, the top layer doesn't have anything useful to do anymore and the system falls back on the layer behind it. For AlphaGo, pretty much the only layer behind its very strong go engine is the rules of go. So even when it is losing it will never play an illegal move, but it will do otherwise trivially stupid things. Humans have a layer between these extremes that prevents them from doing useless stuff. For living entities this is essential for survival: you can forget your dentist appointment, but you cannot forget to let your heart beat. It seems this problem could be mended by putting layers between the top-level algorithm and the most basic hardware level, such that stupid stuff is preempted.
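The proposed fix can be sketched as a subsumption-style stack of layers, tried from most to least sophisticated. The layer functions and game state below are hypothetical placeholders, not AlphaGo's actual architecture.

```python
# Layers ordered from most sophisticated to most basic. Each returns a
# move, or None when its strategy no longer applies.
def strategic_layer(state):
    # The strong engine: only useful while the game is still winnable.
    return "best_move" if state["winnable"] else None

def sanity_layer(state):
    # The middle layer the comment says AlphaGo lacks: when the top
    # goal is thwarted, do something sensible rather than flail.
    return "resign_or_pass" if not state["winnable"] else None

def rules_layer(state):
    # The bottom layer: never an illegal move, but possibly a pointless one.
    return "any_legal_move"

LAYERS = [strategic_layer, sanity_layer, rules_layer]

def act(state):
    """Fall through the layers until one produces a move."""
    for layer in LAYERS:
        move = layer(state)
        if move is not None:
            return move

print(act({"winnable": True}))   # best_move
print(act({"winnable": False}))  # resign_or_pass, not a 'stupid' legal move
```

Without the middle layer, the losing state would fall straight through to `rules_layer` and produce exactly the trivially legal but pointless moves described above.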
I think this behavior is less 'stupid' than it appears. When human beings play Go, the points matter even to the loser, and everyone goes home when it is over. There is life outside of Go. To AlphaGo, Go is its entire universe. Part of the way it was trained was competing against other instances of itself, a sort of Thunderdome where the loser doesn't get to continue existing and doesn't contribute to future generations. To AlphaGo, defeat is death. The behavior we observe when losing is nigh-certain has a human equivalent: we call it desperation. AlphaGo is trying moves that can only possibly work if the opponent makes a catastrophic blunder, which is incredibly unlikely, but it's the only shot it has.
Google Search doesn't, but Google Assistant does. I posed the exact queries suggested by the article and the second query of simply the word "when" did give the correct answer (May 11 1997).
I wonder if now it would correctly take the previous context into account. Google has been working a lot on making their search and assistant more "conversational". [1] looks like one of the results of this endeavour.
[1] https://cloud.google.com/dialogflow/docs/contexts-overview
It’s like saying “my calculator lets me type ’1 + 2 =’ and gives me the answer ‘3,’ so it seems to understand that question, but when I look at the calculator I see there’s no ‘sqrt’ button that would show me the square root of 3.”
The fact that my basic calculator doesn’t have a “sqrt” button is pretty irrelevant to how well it “understands” how to add two numbers together.
I think what they were trying to get at is that understanding is stateful.
For example, imagine a system that has as input the picture of a human face in RAW format. If the system runs the picture through JPEG compression, for example, and returns something substantially smaller, it has shown some understanding of the input (color, spatial repetition, etc).
A more advanced system, with more understanding, may recognize it as a human face, and convert it to a template like the ones used for facial recognition. It doesn't care about individual pixels anymore, or the lighting, just general features of faces. It understands faces.
An even more advanced system may recognize the specific person and compress the whole thing to a few bits.
I would say that an OCR scanner understands the alphabet and how text is laid out, GPT-2 understands the relationship between words and how text is written. And a physics simulator understands basic physics because it can approximately compress a sequence of object movements into only initial conditions and small corrections.
Lossy compression makes this concept non-trivial to measure, but it's still worlds away from the normal philosophical arguments.
If someone asks why you like ice cream, you can tell a nice story about the hot summers of your childhood, but the reality is that sugar and fat are very useful.
If the autopilot of a Tesla hits someone, the error report is "Fatal error 0xDEADBEEF: coefficient 742 > 812".
If a person hits someone, the explanation is "It was dark and near a curve. I was texting, which is totally safe. I got distracted by a reindeer nearby. And I snoozed and was thinking about reaching for a handkerchief".
Human understanding has been wrong often enough, missing enough crucial context to be dangerously, hilariously wrong, even amongst the "experts" of the day who came closest.
This isn't some epistemological nihilism, but a reminder that understanding is incomplete for everyone, and just because a given intelligence's understanding doesn't match our assumptions doesn't mean it is wrong - although it also isn't always right.
There are projects doing video and text understanding. I think the trick to efficient generalization is to have the representations properly factored out somehow. Maybe things like capsule networks will help, although my guess is that to get a really componentized, efficient understanding, neural networks are not going to be the most effective way.
This sounds a bit like studying for the test. What if we made a definition and then worked successfully toward the state where, according to this definition, the system "understands"? Can we expect to be satisfied with the result in general, outside of the definition?
The definition of understanding could be tricky, as history suggests. Other than "to understand is to translate into a form which is suitable for some use", there could be many definitions. The article itself brings examples of chess playing and truck driving, which were considered good indicators, yet failed to satisfy us in some ways.
Maybe we should just keep redefining "understanding" as well as we can today, changing the definition if needed, and work on trying to create a system that is "good", not necessarily one "passing the test"?
But I have to disagree with this (because of course I do):
>> For example, when I tell Siri “Call Carol” and it dials the correct number, you will have a hard time convincing me that Siri did not understand my request.
That is a very common-sense and down-to-earth non-definition of intelligence: how can an entity that is answering a question correctly not "understand" the question?
I am going to quote Richard Feynman who encountered an example of this "how":
After a lot of investigation, I finally figured out that the students had memorized everything, but they didn’t know what anything meant. When they heard “light that is reflected from a medium with an index,” they didn’t know that it meant a material such as water. They didn’t know that the “direction of the light” is the direction in which you see something when you’re looking at it, and so on. Everything was entirely memorized, yet nothing had been translated into meaningful words. So if I asked, “What is Brewster’s Angle?” I’m going into the computer with the right keywords. But if I say, “Look at the water,” nothing happens – they don’t have anything under “Look at the water”!
https://v.cx/2010/04/feynman-brazil-education
In this (in?) famous passage Feynman is arguing that students of physics that he met in Brazil didn't know physics, even though they had memorised physics textbooks.
Feynman doesn't talk about "understanding". Rather he talks about "knowing" a subject. But his is also a very straightforward definition of knowing: you can tell that someone doesn't really know a subject if you ask them many questions from different angles and find that they can only answer the questions asked from one single angle.
So if I follow up "Siri, call Carol" with "Siri, what is a call" and Siri answers by calling Carol, I know that Siri doesn't know what a call is, probably doesn't know what a Carol is, or what a call-Carol is, and so that Siri doesn't have any understanding from a very common-sense point of view.
Not sure if this goes beyond the Chinese room argument though. Perhaps I'm just on a different side of it than Thomas Dietterich.
I think the key ingredient is 'being in the game', that means, having a body, being in an environment with a purpose. Humans are by default playing this game called 'life', we have to understand otherwise we perish, or our genes perish.
It's not about symbolic vs connectionist, or qualia, or self consciousness. It's about being in the world, acting and observing the effects of actions, and having something to win or lose as a consequence of acting. This doesn't happen when training a neural net to recognise objects in images or doing translation. It's just a static dataset, a 'dead' world.
AI until now has had a hard time simulating agents or creating real robotic bodies - it's expensive, and the system learns slowly, and it's unstable. But progress happens. Until our AI agents get real hands and feet and a purpose they can't be in the world and develop true understanding, they are more like subsystems of the brain than the whole brain. We need to close the loop with the environment for true understanding.
It's like saying that red-headed people don't have a soul - there is no way to disprove that assertion.
Does that seem dangerous to anyone else?
I also don't see any distinction between "qualia" and "soul" other than spelling, but perhaps it's because I don't have one.
Finally, I have this question for Searle: Say you understand English. Does any specific neuron in your brain understand English? No, the larger system of neurons+neuronal connections does, so why doesn't the system of grad student+book understand Chinese?
All it shows is that after hundreds of years, we still don't know how to explain or quantify human consciousness.
Qualia are generally argued by Sam Harris to be the simple or reductionist elements of human experience that we can all agree humans share. Burning your finger on a hot stove and recoiling is a conscious experience every human shares.
The soul includes way more ideas and depends on who you talk to. The word has been overloaded a bunch, but generally can be said to include a higher spiritual aspect.
I have also somewhat responded before to Chinese room argument with this comment: https://news.ycombinator.com/item?id=20864005
The idea that new, self-sustaining meaning generation can arise out of the interlocking mechanisms of a computer is an interesting one. As we watch self-driving car CEOs describe some of the most advanced systems we have, which require controlled environments and balk at the infinite complexity of real life, are we really building computer systems that are anything more than an incredibly sophisticated loop?
My point is that humans are also highly-sophisticated, biological machines, so if you say machines cannot "understand", you are making the same claim for humans as well.
Making an absolute claim about what a human is says more about what you fill the unknown with than about the nature of a human.
Understanding is the difficult question. I would argue the understanding people want out of machines is the ability to generate, use, and self-manage tools, with the machine knowing each tool's place or context under a human value, story, or intent, and adapting to the implications of that higher order. That, in the most exaggerated sense, would be perceived as a machine that understands, but of course people mean different things when they say that.