The idea, as seen on DARPA's slides (http://www.darpa.mil/workarea/downloadasset.aspx?id=21474857...), is to get computation several orders of magnitude faster, for their specialized problem sets, than what traditional computing models can theoretically reach even if Moore's law continues.
DARPA would like to first apply this technology to ARGUS drone systems (https://www.youtube.com/watch?v=QGxNyaXfJsA) and related technology, because the video can't be streamed to the ground: tracking and decision making must be done on board, yet traditional processing platforms can track a few orders of magnitude fewer targets than what the military would like.
In a more advanced phase, if memristor or coupled-oscillator (etc.) approaches to building inference models become possible, then programs written under DARPA's other initiative (Probabilistic Programming) could be programmed into these exotic solid-state devices to compute in a way more analogous to today's generic computation. And indeed, the adoption of Probabilistic Programming will eventually train programmers to write code for quantum computers: while more complicated, replacing Probabilistic Programming's PDFs with probability amplitudes almost gets one there.
I hope to see more journalistic coverage of some of the other exotic devices.
And about journalistic coverage... you seem to be knowledgeable about these programs, so there's an opportunity for you :)
This is so amazing! Btw. I found an HP Invent sign in my town, but the security guard wouldn't answer any questions about it. The only thing he said was that I won't find any address or telephone number for it. That made me curious, because HP is working on a memristor-based computer, but I doubt that they produce it in Germany.
They "backdoor listed" onto an Australian mining company; the share price went from 1 cent to 27 cents: https://www.google.com/finance?cid=11163357
Valued at $57m.
Second, you can implement a convnet with a spiking circuit: http://link.springer.com/article/10.1007%2Fs11263-014-0788-3
I feel that I need to emphasize that. The memristor is the component that will easily allow analog logic to run at digital speeds and within digital-logic-style systems (very grossly speaking). What these little guys can do is undersold.
There is a lesson from IBM trying to mimic a rat's brain -- namely, that you should try to solve the problem rather than just burn power.
Better than what?
For example, to simulate a horse, is it necessary to create legs, or is it okay to build a road and use wheels?
The reason I mention it is that memristors are being accused of being vaporware when really they're in production already.
Wow - seems like a lot.
Human brain by comparison (sourced by google): - 12 watts - 100 billion neurons - 1000 trillion connections
Computing with memristors is going to be very interesting.
The article cites 250 billion synapses per watt.
For the same 12 watts as a human brain eats up, a set of memristors could simulate three trillion synapses. A cat, in comparison, has 10 trillion. To get 100 trillion synapses, multiply those 12 watts by 33.3 to get 400 watts (the draw of nearly seven 60-watt incandescent bulbs).
Since one watt bags us 250 billion synapses and 400 watts is equivalent to a memristor-based human brain, then (at roughly one square centimetre per watt) 400 cm^2 is the area needed to emulate the meekest of human minds.
That's nearly the same area as half of a medium Domino's pizza.
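For what it's worth, the arithmetic above checks out; here's a quick sanity-check sketch (the 250-billion-synapses-per-watt figure is from the article, while the 12 W and 100-trillion-synapse brain numbers are the rough estimates quoted in this thread, not measured values):

```python
# Rough back-of-the-envelope check of the synapses-per-watt numbers above.
SYNAPSES_PER_WATT = 250e9      # article's figure: 250 billion synapses per watt

brain_watts = 12               # rough power draw of a human brain
synapses_at_brain_power = SYNAPSES_PER_WATT * brain_watts
print(synapses_at_brain_power / 1e12)   # -> 3.0 (trillion synapses, vs. a cat's ~10 trillion)

target_synapses = 100e12       # ~100 trillion synapses for a human-scale network
watts_needed = target_synapses / SYNAPSES_PER_WATT
print(watts_needed)            # -> 400.0 watts
print(watts_needed / 60)       # -> ~6.67, i.e. nearly seven 60 W bulbs
```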
Certainly is... food for thought.
Just as a note: we are not sure whether that large computer really simulated brain activity or not. The tricky thing in brain research is that we have practically zero* idea about what matters and what can be omitted from the simulation. (For example, the glia cells seem to be important -- until recently, we have disregarded their role.)
So at this point, even if we had an infinitely big computer, we could not simulate the brain properly, because we don't know exactly what to simulate.
*zero means that there's much more we don't know than what we know.
Not really a fair comparison. That's like simulating one processor with another and complaining that 1 second of activity took 40 minutes to simulate. If we can implement the NN directly, rather than simulate it, we should expect a much smaller performance gap.
If your calculations are correct, that doesn't sound so bad. If we could implement a brain-sized (by number of neurons) neural network in the area of half a medium Domino's pizza, that seems like a damn good achievement to me!
The numbers are interesting, though. 400 square centimetres sounds to me to be in the ballpark of a human brain (accounting for several layers).
If they can ever fabricate such a thing ... those neural networks are going to compute some scary stuff!
Do we have to pay extra for Austrian accent?