The author published an open-source library, RNNLIB [1], used for his neural network research, but is the actual code for this handwriting demo published anywhere?
It's a rather cool attempt to draw the Mona Lisa using random, semi-transparent polygons.
Edit: Roger Alsing's implementation was a single-entity population (mutate, then revert if the mutation was no good). I copied this approach in my first implementation, but found that much better results could be achieved with a breeding population of genes.
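The breeding-population idea might be sketched like this. It's a toy stand-in, not anyone's actual implementation: genomes are flat float vectors and "likeness to the target image" is just squared error against a target vector, rather than real polygon rendering and pixel comparison.

```python
import random

random.seed(0)

GENOME_LEN = 32
TARGET = [random.random() for _ in range(GENOME_LEN)]  # stand-in for target pixel data

def fitness(genome):
    # Negative squared error against the target; higher is fitter.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1, scale=0.2):
    # Each gene has a small chance of being nudged by a random amount.
    return [g + random.uniform(-scale, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Uniform crossover: each gene is drawn from either parent.
    return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

def evolve(pop_size=20, generations=200):
    population = [[random.random() for _ in range(GENOME_LEN)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:2]  # elitism: the two fittest survive unchanged
        children = [mutate(crossover(*parents)) for _ in range(pop_size - 2)]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

The contrast with the single-entity approach is the `crossover` step: instead of mutating one champion and reverting bad mutations, each generation breeds many children from the fittest pair.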
Ohhh, interesting, thanks for posting this! I just started playing around with this myself a few days ago in JavaScript (it has no UI so no link yet, but I uploaded some samples [0]), and it also uses the original simple approach. I wondered about an "actual" gene pool and cross-breeding, but shied away from the additional effort for uncertain benefit... so this helps, greatly :)
One thing I intend to try is to take the fittest genome (in terms of likeness to the target image), and then score the other genomes by their difference (in terms of the variables that determine the shapes) to that "champion". I see you take the two fittest as-is; maybe this could be useful for picking the second one?
Also, when the Mona Lisa thing was posted on HN, someone suggested marking areas of the target image as "more important", to maybe make facial features etc. more recognizable. I'll also see if making such a mask automatically, e.g. influenced by contrast, helps any.
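The automatic mask idea could be sketched like this. Everything here is a made-up toy: a 1-D grayscale "image", a hypothetical contrast-based weighting, and a weighted squared-error fitness where high-contrast pixels (edges, i.e. likely facial features) count more.

```python
# Target "image" as a flat grayscale row; values in [0, 1].
target = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.5, 0.5]

def contrast_weights(img, base=1.0, boost=4.0):
    # Weight each pixel by local gradient magnitude (difference to its
    # neighbors), so mismatches at edges cost more in the fitness.
    weights = []
    for i, p in enumerate(img):
        left = img[i - 1] if i > 0 else p
        right = img[i + 1] if i < len(img) - 1 else p
        grad = abs(p - left) + abs(p - right)
        weights.append(base + boost * grad)
    return weights

def weighted_error(candidate, target, weights):
    # Lower is better; errors at high-weight (high-contrast) pixels dominate.
    return sum(w * (c - t) ** 2
               for c, w, t in zip(candidate, weights, target))

weights = contrast_weights(target)
flat = [0.25] * len(target)                       # flat gray candidate
edgy = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # matches the strong edge
```

Under this mask, the candidate that gets the strong edge right scores much better than the flat one, even though both miss some pixels.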
Or the message itself could be encrypted with a more secure system and then physically presented in an open area, so that somebody with a tuned recognizer can capture the encrypted data and decrypt it digitally later.
Take handwriting; the more illegible the better. Then use a genetic algorithm whose fitness function searches for as small a perturbation of the input as possible such that the output is recognized as the letters you want.
No. It couldn't be used to train an OCR. Well, technically it could, but all the OCR would learn is how to read text from this bot, not how to read text written by people.
If you averaged over all those sets, would the resulting blobby heatmap resemble the original word in a legible form? Or something else?
The demo is nice, though.
> Type over this text to prove that you are a computer.
> Human detected. Shoo, shoo!
To be really, really useful, the OCR would need to consider at least all characters in the Unicode Basic Multilingual Plane. And then it needs to be able to reject an image as not containing any word, and then it needs to solve the halting problem.