- not simple SIMD. NVIDIA calls it SIMT (Single Instruction, Multiple Thread), mostly because a subset of the threads can take a branch, so to the programmer it does feel somewhat like threads.
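To make that concrete, here is a minimal sketch (kernel name and sizes are mine, purely for illustration) of a branch taken by only a subset of the threads in a warp. It reads like ordinary per-thread code; the hardware executes both paths with inactive lanes masked off:

```cuda
// Hypothetical kernel: even lanes take one path, odd lanes the other.
// The warp runs both branches in turn, masking the inactive threads.
__global__ void branchy(float *out)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i % 2 == 0)
        out[i] = 2.0f * i;   // even lanes active here
    else
        out[i] = -1.0f;      // odd lanes active here
}

int main(void)
{
    float *d_out;
    cudaMalloc(&d_out, 256 * sizeof(float));
    branchy<<<1, 256>>>(d_out);
    cudaDeviceSynchronize();
    cudaFree(d_out);
    return 0;
}
```

On pure SIMD you would have to express this with explicit masks or blends; under SIMT the compiler and hardware handle the masking for you.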
- not just optimized for graphics anymore. Since Fermi, for example, the Tesla cards have double-precision performance at 50% of single precision - a ratio introduced specifically for HPC. The schedulers have also been steadily improved for general-purpose computing; Kepler 2, for example, seems to support arbitrary call graphs on the device. Again, that's useless for graphics.
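The "call graphs on the device" feature is what NVIDIA documents as dynamic parallelism: a kernel launching another kernel without a round-trip to the host. A hedged sketch (kernel names are mine; it needs compute capability 3.5+ and relocatable device code, e.g. `nvcc -arch=sm_35 -rdc=true example.cu -lcudadevrt`):

```cuda
// Hypothetical example of a device-side kernel launch.
__global__ void child(int depth)
{
    // ... work at this level of the call graph ...
}

__global__ void parent(void)
{
    // A kernel launching another kernel from the device - pointless for
    // rasterizing triangles, useful for adaptive/recursive algorithms.
    child<<<1, 32>>>(1);
}

int main(void)
{
    parent<<<1, 1>>>();
    cudaDeviceSynchronize();
    return 0;
}
```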
- suitable for pretty much all stencil computations. Even for heavily bandwidth-bound problems GPUs are generally ahead of CPUs, since they have much higher memory bandwidth. The performance estimate I use for my master's thesis comes out at 5x for Fermi over a six-core Westmere Xeon for bandwidth-bound problems and 7.5x for compute-bound ones.
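One way to sanity-check the bandwidth-bound side of such an estimate is a STREAM-style triad: a bandwidth-bound stencil sweep runs at roughly the speed this measures, so the GPU/CPU speedup is about the ratio of the two machines' numbers. A sketch under my assumptions (array size and launch configuration are arbitrary):

```cuda
#include <cstdio>

// STREAM-style triad: 2 loads + 1 store per element, no reuse,
// so the runtime is dominated by memory bandwidth.
__global__ void triad(float *a, const float *b, const float *c, float s, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        a[i] = b[i] + s * c[i];
}

int main(void)
{
    const int n = 1 << 24;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventRecord(t0);
    triad<<<(n + 255) / 256, 256>>>(a, b, c, 2.0f, n);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms;
    cudaEventElapsedTime(&ms, t0, t1);
    // Three arrays of 4-byte floats, each touched once.
    double gbps = 3.0 * n * sizeof(float) / (ms * 1e-3) / 1e9;
    printf("effective bandwidth: %.1f GB/s\n", gbps);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```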
HPC is all about performance per dollar and performance per watt - and (sadly) sometimes Linpack results, because some institution wants to be at the top of some arbitrary list. In all of these respects GPUs come out ahead of x86, which has been dominant since the '90s. Which is why GPUs are now in 4 of the top 20 systems - each of which represents an investment of hundreds of millions of dollars. That wouldn't be done if they weren't suitable for most computational problems.