Fittingly, this article reads like a stoner's ramblings. It doesn't get into what exactly a "Super-Turing" machine is capable of that a Turing machine is not. Some googling turned up some theoretical possibilities (oracles and such) which do not appear to be physically buildable.
According to Google, it's third in the country. http://pages.cs.wisc.edu/~remzi/rank.html
Also, how do people not know this already? The result count Google reports is just an inaccurate estimate, designed to give people the "wow" factor when they search.
Must have been some years ago, then, because Extravaganja has been held in Amherst town for a long while now.
I'm not great at interpreting technobabble, but I think it means everything NP-hard is now solvable in linear time.
Or something.
The more I look into it, the more this Super-Turing business sounds like the computability-theory version of FTL neutrinos. According to Martin Davis, Siegelmann's paper involves neural nets with arbitrary real-numbered (as opposed to integer or rational) weights being able to recognize languages on the alphabet {a,b} that a Turing machine can't.
So you have this thing that can get uncomputable results because it's given uncomputable inputs! In other words, no more interesting, or practical, than attempting to do computation with the n-body problem, and almost certainly nothing we can actually build.
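A toy sketch of that point (the names and encoding here are mine, not Siegelmann's construction): pack an arbitrary bit sequence into a single "real-valued weight," then recover it by querying bits. If the sequence were something uncomputable like the halting set, the weight itself would be uncomputable, and so would every answer the machine reads off it.

```python
from fractions import Fraction

def encode(bits):
    # Pack a bit sequence into one "weight" as a binary fraction 0.b0b1b2...
    return sum(Fraction(b, 2 ** (i + 1)) for i, b in enumerate(bits))

def query(weight, i):
    # Recover the i-th bit of the weight's binary expansion.
    return int(weight * 2 ** (i + 1)) % 2

# Pretend this is the halting set restricted to the first 5 programs.
halting_bits = [1, 0, 1, 1, 0]
w = encode(halting_bits)
print([query(w, i) for i in range(5)])  # [1, 0, 1, 1, 0]
```

The "computation" here is trivial; all the uncomputable power lives in the weight you were handed, which is the whole objection.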
http://binds.cs.umass.edu/papers/1995_Siegelmann_Science.pdf
I haven't had time to do anything more than skim it. My initial bogometer reading is not quite pegged, but it's close.
It appears to be a poorly executed and overstated attempt at (re-)discovering high-order inductive computational models. These are equivalent to Turing models unless you happen to have a hypercomputer at your disposal.
The endless parade of low-quality claims like this is why no one takes AI research seriously, even the quality work.
Really? You don't think modern AI research has produced any serious results? Do you know how many applications machine learning methods are used in today?
Of course, in these cases the news reporters are more to blame than the scientists who worked on the problem.
So this device is an example of a hypercomputer. Its existence would disprove the Church-Turing thesis. It would also be very difficult to verify that it was actually calculating what it claimed [1]. As a hypercomputer it could, by definition, solve the halting problem. It is also more powerful than a quantum computer.
Simple argument: a Turing machine can inefficiently simulate a quantum computer, but by definition no Turing machine can simulate a hypercomputer. A hypercomputer could hence compute uncomputable functions. And since it could tell whether a program will halt, it could compute Chaitin's constant. One consequence is the resolution of the twin prime conjecture, Goldbach's conjecture, and other open number-theory problems (e.g., Goldbach is false iff a program searching for a counterexample halts). It should also be able to compute Solomonoff's universal prior and hence act in a Bayes-optimal manner. Strong AI.
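To see the Chaitin's-constant step concretely, here's a toy sketch (entirely made-up program set and oracle, just to show the summation structure): with a halting oracle, Omega is approximable to any precision by summing 2^(-len(p)) over halting programs p from a prefix-free set. The stub oracle below is exactly the part no Turing machine can supply.

```python
def halts(program):
    # Stand-in oracle over a made-up prefix-free program set
    # (unary strings "0"*k + "1"): pretend even-k programs halt.
    # A real halting oracle is what the hypercomputer would provide.
    return program.count("0") % 2 == 0

def omega_approx(max_len):
    # Omega = sum of 2^(-len(p)) over halting programs p.
    # Prefix-freeness of the program set makes the sum converge.
    total = 0.0
    for k in range(max_len):
        p = "0" * k + "1"
        if halts(p):
            total += 2.0 ** (-len(p))
    return total

print(omega_approx(20))  # converges toward 2/3 for this toy oracle
```

With the real (uncomputable) oracle, enough bits of Omega settle open Pi-1 questions like Goldbach, which is where the dramatic consequences in the comment above come from.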
If what is claimed can be done then this is a very big deal.
Some other implicit or explicit arguments in this article:
- The human brain is more than a Turing machine
- A Turing machine will not be able to realize AI
- "Classical computers work sequentially and can only operate in the very orchestrated, specific environments for which they were programmed."
I don't buy any of those arguments, and I'm not sure what that last quote is supposed to mean.
> such super-Turing capabilities can only be achieved in cases where the evolving synaptic patters [sic] are themselves non-recursive (i.e., non Turing-computable)
"Interactive Evolving Recurrent Neural Networks are Super-Turing", 2012, Jérémie Cabessa http://jcabessa.byethost32.com/papers/CabessaICAART12.pdf
So: create a neural network whose weights evolve according to a non-Turing-computable pattern, and its output may not be Turing-computable either.
Hypercomputation models that depend on things like infinite-precision real numbers have been around for a while, including in Siegelmann's work, so I'm curious to know what specific advance is being reported here in "Neural computing".
I wonder if you could just build an ARNN and "fudge" the infinite precision with a reasonably large finite precision? Or, with a big disk, you could compute an unreasonable amount of precision :) Or store the infinite precision lazily, calculating only as many digits as you need for a particular answer.
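The lazy idea can be sketched like this (using sqrt(2) as a stand-in for an arbitrary real weight): represent the weight as a bit stream that is only computed on demand. The catch, per the earlier comments, is that laziness only works for computable reals; a genuinely super-Turing weight would need an uncomputable stream here.

```python
from math import isqrt
from itertools import islice

def sqrt2_bits():
    # Lazily yield the fractional binary digits of sqrt(2).
    # Each extra bit costs one integer square root at doubled precision;
    # nothing is computed until a consumer asks for it.
    n = 0
    while True:
        n += 1
        # floor(2^n * sqrt(2)) via exact integer sqrt; its low bit is
        # the n-th fractional binary digit of sqrt(2).
        yield isqrt(2 << (2 * n)) & 1

print(list(islice(sqrt2_bits(), 8)))  # [0, 1, 1, 0, 1, 0, 1, 0]
```

So you get exactly the "only as much precision as the answer needs" behavior, but any such generator is itself a Turing-computable object, which is why fudging doesn't buy super-Turing power.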