The problem with Peter Norvig is that he comes from a mathematical background and is a strong defender of the use of statistical models that have no biological basis.[1] While they have their uses in specific areas, they will never lead us to a general-purpose strong AI.
Lately Kurzweil has come around to seeing that symbolic and Bayesian networks have been holding AI back for the past 50 years. He is now a proponent of using biologically inspired methods similar to Jeff Hawkins' approach of Hierarchical Temporal Memory.
Hopefully, he'll bring some fresh ideas to Google. This will be especially useful in areas like voice recognition and translation. For example, just last week I needed to translate "I need to meet up" into Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, because statistics and probabilities will never teach machines to "understand" language.
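The failure mode is easy to reproduce in miniature. The phrase table and probabilities below are invented for illustration (this is not Google's actual data or algorithm): a system that greedily picks each phrase's most probable translation in isolation has no way to prefer the "gather" sense of "meet" over the "satisfy" sense.

```python
# Toy phrase-table lookup. Each source phrase maps to candidate
# translations with made-up probabilities. Choosing the argmax for
# each phrase independently ignores the surrounding context.
phrase_table = {
    "I":       [("我", 0.9)],
    "need to": [("需要", 0.8)],
    "meet":    [("满足", 0.6),   # "satisfy" sense (meet a requirement)
                ("见面", 0.4)],  # "gather" sense (meet a person)
    "up":      [("", 0.5)],      # particle, often dropped
}

def translate(phrases):
    # Greedy, context-free decoding: take the most probable option
    # for every phrase on its own.
    return "".join(max(phrase_table[p], key=lambda t: t[1])[0]
                   for p in phrases)

print(translate(["I", "need to", "meet", "up"]))  # → 我需要满足
```

Real statistical systems also score the fluency of the output with a language model, which can sometimes rescue the right sense, but the example shows why frequency alone favors the wrong reading.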
[1] http://www.tor.com/blogs/2011/06/norvig-vs-chomsky-and-the-f...
It's still flight, even if it's not done like a bird. Just because nature does it one way doesn't mean it's the only way.
(On a side note, multilayer perceptrons aren't all that different from how neurons work - hence the term "artificial neural network". But they also have a purely mathematical/statistical grounding. The divide between the two camps is not clear-cut; the whole point of mathematics is to model the world.)
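The analogy can be made concrete with a minimal sketch (the weights and network size are my own hand-picked, illustrative choices): each unit in a multilayer perceptron computes a weighted sum of its inputs and passes it through a smooth threshold, loosely mirroring a neuron firing once its summed input crosses a threshold.

```python
import math

def sigmoid(x):
    # Smooth threshold: loosely analogous to a neuron's firing response.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each "neuron" takes a weighted sum of its inputs plus a bias,
    # then applies the nonlinearity.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny 2-2-1 network with hand-picked weights that computes XOR,
# a function a single perceptron famously cannot represent.
hidden_w = [[ 6.0,  6.0],   # first hidden unit acts like OR
            [-6.0, -6.0]]   # second hidden unit acts like NAND
hidden_b = [-3.0, 9.0]
out_w    = [[6.0, 6.0]]     # output unit acts like AND of the two
out_b    = [-9.0]

def forward(x):
    h = layer(x, hidden_w, hidden_b)
    return layer(h, out_w, out_b)[0]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b])))  # prints the XOR truth table
```

In practice the weights are learned from data by gradient descent rather than set by hand, which is exactly where the statistical side of the story comes in.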
Nobody knows how neurons actually work: http://www.newyorker.com/online/blogs/newsdesk/2012/11/ibm-b.... We are missing vital pieces of information needed to understand that. Show me your accurate C. elegans simulation and I will start to believe you have something.
Perhaps in a hundred years, this will be the argument: for several hundred years, inventors tried to build an AI by creating artificial contraptions, ignoring how biology worked, inspired by a historically fallacious anecdote about how early inventors only tried to fly by building contraptions with flapping wings. It was only when they figured out that evolution, massively parallel mutation and selection, is actually necessary that they managed to build an AI.
If you think they are insufficiently accurate, submit a pull request.
To quote Jeff Hawkins: "This kind of ends-justify-the-means interpretation of functionalism leads AI researchers astray. As Searle showed with the Chinese Room, behavioral equivalence is not enough. Since intelligence is an internal property of a brain, we have to look inside the brain to understand what intelligence is. In our investigations of the brain, and especially the neocortex, we will need to be careful in figuring out which details are just superfluous "frozen accidents" of our evolutionary past; undoubtedly, many Rube Goldberg–style processes are mixed in with the important features. But as we'll soon see, there is an underlying elegance of great power, one that surpasses our best computers, waiting to be extracted from these neural circuits.
...
For half a century we've been bringing the full force of our species' considerable cleverness to trying to program intelligence into computers. In the process we've come up with word processors, databases, video games, the Internet, mobile phones, and convincing computer-animated dinosaurs. But intelligent machines still aren't anywhere in the picture. To succeed, we will need to crib heavily from nature's engine of intelligence, the neocortex. We have to extract intelligence from within the brain. No other road will get us there."
As someone with a strong background in biology who took several AI classes at an Ivy League school, I found that all of my CS professors had a disdain for anything to do with biology. The influence of these esteemed professors and the institutions they perpetuate is what's been holding the field back. It's time people recognized it.
The Chinese Room experiment doesn't show only that. It also shows how important the inter-relationships between the component parts of a system are.
We're reducing the Chinese Room to the person inside and the objects they are using, such as a lookup table. But what we're missing is the complex pattern among the answers: the structure and mutual integration that exists in their web of relations.
If we could reduce a system to its parts, our brains would be just a bag of neurons, not a complex network. We'd reach the conclusion that brains can't possibly have consciousness, on the grounds that there is no "consciousness neuron" to be found in there. But consciousness emerges from the inter-relations of neurons, and the Chinese Room can understand Chinese on account of its complex inner structure, which models the complexity of the language itself.
Honestly, I imagine we'd find out more from philosophers helping to spec out what a sentient mind actually is than we would from biologists trying to explain imperfect implementations of the mechanisms of thought.
This is irrelevant.
This is like saying a computer using an x86 processor is different, from the point of view of the user, from an ARM computer, beyond differences in software.
Or like saying DNA, and not some other technology, is needed for "data storage" in biological systems.
Sure, you can get inspiration from biology, but that doesn't necessarily mean you have to copy it.
""I need to meet up" to Chinese. Google translates it to 我需要满足, meaning "I need to satisfy". This is where statistical translations fail, "
It's not really a fault of statistical translation (more likely a data-quality issue), even though the approach has its limitations. Besides, Google's translation has been successful exactly because it's better than other existing methods (and Google has the resources, in both people and data, to make it better).
Garbage in, garbage out! If you use 'I' in a sentence fragment when you mean to use 'We' then you can't really blame the translator for getting it wrong.
'We need to meet up' is a sentence with a completely different meaning from the incorrect and semantically confusing 'I need to meet up'; the latter really does sound as if you need to meet up to some expectation.
If someone wants to attack Google's Chinese translation, it should be over snippets like 8十多万 or its failure to recognize many personal and place names which could easily be handled by a pre-processor. Google has never been competent in China in part because of their hiring decisions, but this isn't Franz Och's fault.
To avoid the wrath of the Google fanboys, a better example would have been the pinnacle of statistical AI, IBM's Watson on Jeopardy!. The category was "U.S. Cities" and the clue was: "Its largest airport is named for a World War II hero; its second largest for a World War II battle." The human competitors Ken Jennings and Brad Rutter both answered correctly with "Chicago", but Watson said "Toronto."
Once again, Watson, a probability-based system, failed where real intelligence would not.
Google has done an amazing job with their machine translation, considering they cling to these outdated statistical methods. And just as the speech recognition field has found out over the last 20 years, they will continue to get diminishing returns until they start borrowing from nature's own engine of intelligence.
Ken Jennings thought that a woman of loose morals could be called a "hoe" (with an "e", which makes no sense!), when the correct answer was "rake". Is Ken Jennings therefore inhuman?