Go is a lot harder for machines to play than you think.
Translation still has many failure cases. Speech to text cannot yet handle intonation, and auto-driving cannot yet handle driving in places like India. And reading, then summarizing, a page of a comic book while walking across a room is currently impossible.
I am not comparing translation or machine vision to AlphaGo; I am merely pointing out that Go comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.
AlphaGo can beat the next best Go-playing bot purely using its neural net ensemble, without using MCTS, for example. That's a pleasantly surprising result never seen before; that it can beat another bot without doing a single tree search during play and evaluation is a testament to how impressive it is.
I did not. You said Go is in a lot of respects much more challenging than machine translation, speech to text, and auto-driving. I merely pointed out that this is wrong, because a superhuman Go player exists while superhuman machine translation, speech to text, and auto-driving do not. Go is a perfect information game with no shallow traps. Perfect information means that, unlike poker, information sets are not cross-cutting, so algorithms can leverage the fact that backwards induction is straightforward.
No shallow search traps and perfect information make things a lot easier from a computational perspective. Driving at a superhuman level would require a sophisticated forward model from a physics perspective, before even considering predicting other drivers. Speech to text and fluent translation without brittle edge cases require hierarchical predictive models that capture long-term correlations and higher-order concepts. I'm not disputing that Go is hard, but the core difficulties were the known hurdles: a high branching factor and the lack of an evaluation heuristic. The DeepMind team's genius was training via reinforcement in a way that broke the correlations which get in the way of learning, and integrating rollouts with the neural nets (breaking evaluation into value and policy as they did). The rollouts and evaluation are what eat up so much electricity.
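To make the backwards-induction point concrete, here is a minimal sketch using a toy Nim variant (my own illustration, nothing from AlphaGo): with perfect information, the exact value of every position backs up from the terminal states by plain recursion, with no information sets to marginalize over.

```python
from functools import lru_cache

# Toy perfect-information game: players alternately take 1-3 stones,
# and whoever takes the last stone wins. Backward induction (negamax)
# computes exact values upward from the terminal position.

@lru_cache(maxsize=None)
def value(stones):
    """+1 if the player to move wins with perfect play, -1 otherwise."""
    if stones == 0:
        return -1  # no stones left: the previous player took the last one
    # Pick the move that leaves the opponent in the worst position.
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)
```

Real Go is vastly too large for exhaustive backward induction, of course; the point is only that perfect information makes this recursion well-defined, which is exactly what MCTS exploits by sampling it instead of enumerating it.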
> The fact that it took the most elite ML lab in the world to engineer this solution, using proprietary hardware never seen before that's orders of magnitude faster at evaluation than what's available to the rest of us, is a testament to how hard it is to beat Go.
AlphaGo can run on a GPU, just not (for now) as efficiently as on a TPU. DeepMind is indeed unmatched in output. AlphaGo did build on the 2006 breakthrough paper on tree-based bandit algorithms (UCT), and there was another important 2014 paper on applying conv-nets to Go. DeepMind did amazing work, but it did not come out of nowhere.
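For context, the 2006 result referenced here is the UCB1-style selection rule that UCT applies at each tree node. A rough sketch (the constant and function names here are illustrative, not DeepMind's actual tuning):

```python
import math

def ucb1_score(mean_value, parent_visits, child_visits, c=1.4):
    """Score a child node: exploit its observed mean value, plus an
    exploration bonus that grows with the parent's visit count and
    shrinks as the child itself is visited more."""
    return mean_value + c * math.sqrt(math.log(parent_visits) / child_visits)

def select_child(children, parent_visits):
    """Pick the index of the child maximizing the UCB1 score.
    children is a list of (mean_value, child_visits) pairs;
    unvisited children are tried first."""
    return max(range(len(children)),
               key=lambda i: float("inf") if children[i][1] == 0
               else ucb1_score(children[i][0], parent_visits, children[i][1]))
```

The bandit framing is what made tree search tractable for Go's branching factor: instead of expanding every move, the search keeps re-investing simulations in the moves whose upper confidence bounds stay highest.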
And, sure Go is hard. But from a computational perspective, it is still much easier than being able to run up a hill or climb a tree. Humans are just not very good at playing combinatorial games, so the ceiling is low.
> I am merely pointing out that Go comes with a broad set of challenges that you aren't even aware of and is a lot harder than you think.
That is absolutely untrue. I have a decent understanding of the implementation and a strong understanding of the underlying algorithms.
More to your point, my decades remark had a weaker notion of amateur. For each game, we've had something that could beat most humans for decades. But you're right, that's not a useful distinction.
If we look just at Go, the decades remark is somewhat of a stretch. Go has been especially difficult, requiring more intelligent algorithms to handle branching and state evaluation (the latter in particular is a function too complex to fit in human consciousness).
But progress has been occurring for years. On 9x9 boards, MCTS bots have been stronger than most humans since about 2007, 10 years ago. For 19x19 it's true; if we pick 4 or 5 dan as better than most amateurs, then that's 6-7 years.
Humans take 10-20 years of semi-supervised learning to acquire this combination of common sense, knowledge, and problem-solving. It also happens in stages: infants and especially young children have brains in overdrive taking in everything, followed by stages that are more selective about what they take in and solidify. Training AIs to be smart for real, common sense and all, might take over a decade of data for the first one unless the problem can be decomposed; even then it will take years of similar experiences.
https://arxiv.org/abs/1705.08807
With that said, the survey above is what the world's AI researchers think is possible, hopefully within my lifetime, using just applied AI without any notion of "common sense".
Common sense is AGI. That's not the goal anymore. The goal is to do things like self driving cars. Both Google and Tesla have placed vehicles on the road that have driven for literally millions of miles.
The idea is to build a bunch of classifiers and regression models and use them together in an ensemble to solve your problem. The same approach, with deep learning in the mix, is being applied successfully to a lot of unrelated fields.
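As a toy illustration of the ensemble idea (the decision stumps below are made up for the example, not from any real system), individual weak models can be combined by a simple majority vote:

```python
from statistics import mode

# Three hypothetical weak classifiers ("stumps") over a 2-feature input,
# each voting 1 (positive) or 0 (negative).
def stump_a(x): return int(x[0] > 0.5)
def stump_b(x): return int(x[1] > 0.5)
def stump_c(x): return int(x[0] + x[1] > 1.0)

def ensemble_predict(x, models=(stump_a, stump_b, stump_c)):
    """Majority vote over the individual models' 0/1 predictions."""
    return mode(m(x) for m in models)
```

Production ensembles are more sophisticated (weighted votes, stacking, boosting), but the core idea is the same: several mediocre models combined often beat any one of them alone.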
Also, modern AI doesn't even pretend to be biological in nature; in fact, well-known researchers like Andrew Ng make a point of saying that these systems are only biologically inspired, and that's where the commonalities end.
There are other models, like HTM (Hierarchical Temporal Memory), that are way more ambitious and want to come up with a single generalized scheme to solve a broad range of problems, AGI-style. These guys think biology is important and are trying to emulate the neocortex. They ARE going for AGI, common sense, etc.
Replacing a human driver takes an AGI, at least for exceptional or new situations. That's why we're including it as a counter rather than a supporting point.