You're right, there is a lot of misinformation and hype about Hadoop out there, and I think there is real value in Hadoop as a cheap data-integration archive. But the parent poster's point still stands. A Hadoop-based infrastructure currently has a lot of impedance mismatch for full end-to-end advanced analytics involving heavy stats, linear algebra, or graph work built on native (non-Java) code.
I would love to see a TCO analysis of Hadoop-plus-analytics versus buying a more traditional "supercomputer" stack with InfiniBand, or one of the nifty Cray/SGI NUMA systems. Current data warehouse and BI folks are fixated on cost per PB of storage, and Hadoop is very cheap by that single metric. I suspect that once enough human factors and the accuracy/agility of modeling results are considered, the latter may be quite cost-effective. It's just that the "big iron" vendors are still in the middle of retooling their marketing for the BI/DW/ETL crowd. When they finally figure it out, it's going to be a bloodbath.
For instance, SGI UVs can give me 24TB-64TB of RAM in a single "system". I still have to make sure I do my multithreading/multiprocessing well, but the interconnect is lower latency than 40GbE. https://www.sgi.com/products/servers/uv/
HP ProLiants can now fit 48-60 cores and 6TB of RAM in a single 4U system: http://www8.hp.com/us/en/products/servers/proliant-servers.h...
Buying a few of these scale-up systems is a LOT cheaper than hundreds of Hadoop nodes sitting around maxing out I/O while their expensive Xeons run at 10% CPU load. Especially given that you can hire anyone out of science/engineering grad school and they can program these scale-up systems, whereas writing a bunch of Java MR jobs for Hadoop is quite foreign to them.
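To make the programming-model point concrete, here's a minimal sketch of what a MapReduce-style word count looks like on a shared-memory box, using nothing but the Python standard library (no cluster, no JVM, no job scheduler). The function names and chunking scheme are illustrative, not from any particular system:

```python
from collections import Counter
from multiprocessing import Pool

def count_words(chunk):
    # "map" step: count words in one slice of the input
    return Counter(chunk.split())

def word_count(text, workers=4):
    # split the input into roughly equal line-based chunks
    lines = text.splitlines()
    step = max(1, len(lines) // workers)
    chunks = ["\n".join(lines[i:i + step])
              for i in range(0, len(lines), step)]
    # fan the chunks out across local processes
    with Pool(workers) as pool:
        partials = pool.map(count_words, chunks)
    # "reduce" step: merge the per-worker counters
    total = Counter()
    for c in partials:
        total += c
    return total

if __name__ == "__main__":
    text = "hadoop scale up\nscale up wins\n" * 2
    print(word_count(text)["scale"])  # → 4
```

That's the whole thing — on a 6TB ProLiant or an SGI UV, the data just sits in RAM and the OS handles the rest, which is exactly the kind of code a fresh grad student already knows how to write.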