It automatically parallelizes any large-scale data-processing pipeline (assuming your dataset is already broken into files). People use it for things like documentation generation; I've used it to process the results of web scrapers, for large-scale data-cleaning tasks, and so on.
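As a minimal sketch of the per-file parallelism (assuming the tool in question is GNU Make; the `raw/` and `out/` directories and the `tr` "cleaning" step are hypothetical stand-ins for a real pipeline):

```shell
mkdir -p demo/raw demo/out
printf 'a,b\n1,2\n' > demo/raw/x.csv
printf 'a,b\n3,4\n' > demo/raw/y.csv

cat > demo/Makefile <<'EOF'
# Use ">" instead of a leading tab for recipes (GNU Make >= 3.82),
# which avoids tab/space copy-paste accidents.
.RECIPEPREFIX := >

RAW := $(wildcard raw/*.csv)
OUT := $(patsubst raw/%.csv,out/%.csv,$(RAW))

all: $(OUT)

# One pattern rule applied independently to every input file;
# "make -j" runs as many of these recipes in parallel as you ask for.
out/%.csv: raw/%.csv
> tr 'a-z' 'A-Z' < $< > $@
EOF

make -C demo -j4
```

Because each output depends only on its own input file, `make -j4` processes the files concurrently, and rerunning `make` rebuilds only the outputs whose inputs changed.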
It takes about an afternoon to set up NFS + SSH primitives that automatically distribute the computation across a cluster of machines. Because it is restartable (already-completed targets are simply skipped on the next run), it tolerates hardware faults automatically; in practice this holds up to clusters of dozens of machines.
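One common shape for the NFS + SSH setup (a cluster config fragment, not runnable standalone; the tool is assumed to be GNU Make, and `rsh.sh` and `hosts.txt` are hypothetical names): override make's `SHELL` with a wrapper that ships each recipe to a remote machine, relying on the NFS mount so paths mean the same thing everywhere.

```shell
#!/bin/sh
# rsh.sh (hypothetical): make invokes this as "rsh.sh -c '<recipe>'".
shift                         # drop the -c flag
HOST=$(shuf -n 1 hosts.txt)   # hosts.txt lists the cluster machines
# The working directory is NFS-mounted on every host, so cd'ing to the
# same $PWD remotely gives the recipe the same view of the data.
exec ssh "$HOST" "cd $PWD && sh -c '$1'"
```

Then `SHELL := ./rsh.sh` in the Makefile plus `make -j64` fans the jobs out across the cluster; if a host dies, rerunning `make` redoes only the targets that never finished.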
Basically, you get MapReduce, but for arbitrary data-processing DAGs, in any language that runs well on Unix-style operating systems.
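To make the "arbitrary DAG" point concrete, here is a sketch of a diamond-shaped dependency graph, which plain MapReduce can't express directly (again assuming GNU Make; the stage names and commands are made up for illustration):

```shell
mkdir -p dag
cat > dag/Makefile <<'EOF'
.RECIPEPREFIX := >

all: report.txt

words.txt:
> printf 'alpha\nbeta\n' > $@

# Two independent branches off the same input; "make -j" runs
# them in parallel once words.txt exists...
counts.txt: words.txt
> wc -l < $< > $@

upper.txt: words.txt
> tr 'a-z' 'A-Z' < $< > $@

# ...and a join step that waits for both: a diamond, not a
# single map stage followed by a single reduce.
report.txt: counts.txt upper.txt
> cat counts.txt upper.txt > $@
EOF

make -C dag -j2
```

Each recipe is just a shell command, so every stage can be a different program in a different language: a Python scraper feeding an awk cleaner feeding an R report, all in one graph.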