http://www.wired.com/2016/01/microsoft-neural-net-shows-deep...
They're showing you how to train different architectures simultaneously and then compare the results to select the best one. That's great as far as it goes.
The drawback is that with this scheme, you can't actually train a given network faster, which is what you want to do with Spark. What is the role of a distributed runtime in training artificial neural networks? It's simple: NNs are computationally intensive, so you want to spread the work over many machines.
Spark can help you orchestrate that through data parallelism, parameter averaging and iterative reduce, which we do with Deeplearning4j.
http://deeplearning4j.org/spark https://github.com/deeplearning4j/dl4j-spark-cdh5-examples
Data parallelism is an approach Google uses to train neural networks on tons of data quickly. The idea is that you shard your data across many identical copies of the model, train each copy on a separate machine, and then average their parameters. That works, it's fast, and it's how Spark can help you do deep learning better.
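To make the averaging step concrete, here's a minimal sketch in plain NumPy. This is not the actual DL4J/Spark API; the parameter names ("W", "b") and the `average_parameters` helper are made up for illustration. Each worker would train its copy of the model on its shard, then the driver averages the resulting parameters:

```python
import numpy as np

def average_parameters(worker_params):
    """Average the parameters produced by each worker.

    worker_params: a list of dicts mapping parameter name -> array,
    one dict per worker. The names here are illustrative.
    """
    keys = worker_params[0].keys()
    return {k: np.mean([p[k] for p in worker_params], axis=0) for k in keys}

# Toy example: three workers, each holding a 2x2 weight matrix and a bias.
workers = [
    {"W": np.full((2, 2), float(i)), "b": np.array([float(i)])}
    for i in range(3)
]

avg = average_parameters(workers)
print(avg["W"])  # every entry is 1.0, the mean of 0, 1 and 2
```

In the iterative-reduce setup, this average becomes the new starting point for the next round of training on every worker.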
Say you are training a NN to recognize the handwritten characters 0 and 1, and you have 1000 training images for each character (so 2000 images in total). All images are bitmaps with 0 for black and 1 for white.
Now, by accident, all the "0" training-images have an even number of black pixels, and all the "1" training-images have an odd number of black pixels.
How do you know that the NN really learns to recognize 0's and 1's, as opposed to recognizing whether the number of pixels in an image is even or odd?
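One cheap way to probe for this kind of shortcut is to check, before training, whether pixel parity alone predicts the labels. A toy sketch (random stand-in data, assumed 28x28 bitmaps; not from any real dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the 2000 bitmaps: 0 = black pixel, 1 = white pixel.
images = rng.integers(0, 2, size=(2000, 28 * 28))
labels = rng.integers(0, 2, size=2000)

# Parity of the black-pixel count per image: 0 = even, 1 = odd.
parity = (28 * 28 - images.sum(axis=1)) % 2

# If parity predicts the label far better than chance, the network
# could learn the parity shortcut instead of the character shapes.
parity_accuracy = (parity == labels).mean()
print(parity_accuracy)  # near 0.5 here, since this toy data is random
```

In the scenario described above, parity_accuracy would come out at 1.0, which is the red flag: any feature that trivially separates the training labels is a candidate shortcut.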
If you were using a deep network though, and if the current theory is correct, it would be a slightly different story. The current thinking, as I understand it, is that with deep networks, each layer learns representations of certain features (say "slashes", "edges", "right-slanted lines", "left-slanted lines", etc.), and the progressively higher layers learn representations composed of those more primitive features. So if a deep net were recognizing your handwritten characters, you could probably reason that it isn't just considering whether the number of black pixels is even or odd.
Now in reality this is a pretty contrived and probably unlikely scenario. But it's a valid question, because there's a deeper point to all of this, which involves transfer of learning. That is, how do you take the learning done by a neural network trained to do one thing and leverage it in another application? We still don't exactly know how to do that, and that's in part because we don't entirely understand the nature of the representations the networks build up. So a very good answer to your question would arguably help us understand how to do transfer, which would make NNs even more useful.
It also goes without saying that 2k images is probably not going to be enough data to learn any meaningfully general feature representation.
There's often no way to know exactly what a neural network is doing, but sanity checks can catch most issues. Realistically, you wouldn't expect a neural network to perform with 100% accuracy, which would be a first clue in your example.
"There is a humorous story from the early days of machine learning about a network that was supposed to be trained to recognize tanks hidden in forest regions. The network was trained on a large set of photographs – some with tanks and some without tanks. After learning was complete the system appeared to work well when “shown” additional photographs from the original set. As a final test, a new group of photos were taken to see if the network could recognize tanks in a slightly different setting. The results were extremely disappointing. No one was sure why the network failed on this new group of photos. Eventually, someone noticed that in the original set of photos the network had been trained on, all of the photos with tanks had been taken on a cloudy day, while all of the photos without tanks were taken on a sunny day. The network had not learned to detect the difference between scenes with tanks and without tanks, it had instead learned to distinguish photos taken on cloudy days from photos taken on sunny days!"[0]
The pragmatic answer is that this is why you have two hold-out sets: a cross-validation/dev set and a test set. Typically you keep 70% of the data for training, 15% for CV and 15% for test. Ideally you should shuffle the data enough that there isn't any bias in the natural order of the data.
You train the model on the train data, and estimate how well the model actually performs on the CV set which the model did not see in training. You continue to use the CV set while you tweak parameters, try out new models etc. At this point you may have "cheated" a bit because you only kept things that worked well on your CV data. Finally when you say "this is done!" you try out your model on the Test data set.
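The 70/15/15 shuffle-and-split described above can be sketched in a few lines of NumPy (the `split_dataset` helper is my own illustration, not any particular library's API):

```python
import numpy as np

def split_dataset(X, y, train=0.70, cv=0.15, seed=0):
    """Shuffle, then split into train / cross-validation / test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # shuffle away ordering bias
    n_train = int(train * len(X))
    n_cv = int(cv * len(X))
    tr, cv_idx, te = np.split(idx, [n_train, n_train + n_cv])
    return (X[tr], y[tr]), (X[cv_idx], y[cv_idx]), (X[te], y[te])

# Toy data: 100 examples.
X = np.arange(100).reshape(100, 1)
y = np.arange(100) % 2

(train_set, cv_set, test_set) = split_dataset(X, y)
print(len(train_set[0]), len(cv_set[0]), len(test_set[0]))  # 70 15 15
```

The important discipline is procedural, not the code: tune against the CV set as much as you like, but touch the test set exactly once, at the end.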
Of course it's still possible that you would have the even/odd issue, and the answer to this whole set of issues is "healthy skepticism", and checking for these types of errors.
Take for example this Sentence Completion Challenge from Microsoft Research [1]
They claim some astounding results on correctly predicting GRE-type questions using a very simple model (LSA, for those who care). These results seemed impossible! But it turns out they cheated by training the model only on the possible answers (which is akin to studying for the actual GRE by only reviewing the possible answers that will be on the exam).
We tend to obsess over p-values and test-validation scores as a substitute for reasoning. But all research papers should be read as an argument a friend is making to you, "I've done this incredible thing...", and no single number should replace reasoned inquiry into possible errors.
https://www.ibm.com/developerworks/community/blogs/InsideSys...
http://www.theregister.co.uk/2011/02/21/ibm_watson_qa_system...
http://learning.acm.org/webinar/lally.cfm
http://www.cs.nmsu.edu/ALP/2011/03/natural-language-processi...
Of course, there's still a big gap between "Download some stuff" and "Build Watson", but at least there's a trickle of details on what happens in the "a miracle happens here" step. :-)
http://go.databricks.com/hubfs/notebooks/TensorFlow/Distribu...
http://go.databricks.com/hubfs/notebooks/TensorFlow/Test_dis...
You might've missed the section "How do I use it?" Maybe we should've made that section more obvious.