You seem to be ignoring my two other data points. Similarly, bandwidth costs are small, unless you're really pumping out a lot of data. I don't even notice our EBS transfer costs on our bill.
And how reliable a measure is UnixBench, when the top-performing server in that list, "CC2 Large" (by 25% over second place!), is a 2-core, 8GB-RAM offering? It easily beats out all the two-dozen-core, high-RAM offerings below it.
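Part of the answer, as I understand it, is that UnixBench folds all its sub-test results into one overall index using a geometric mean, so a box with strong single-thread sub-scores can out-rank a many-core box with mediocre ones. A toy sketch (the per-test numbers here are made up, not real UnixBench output):

```python
from math import prod

def unixbench_index(scores):
    # Geometric mean of per-test index scores, which is how
    # UnixBench combines its sub-benchmarks into one number.
    return prod(scores) ** (1 / len(scores))

# Hypothetical per-test indices: a fast 2-core machine with strong
# single-thread results vs. a many-core machine with flat results.
two_core = [2000, 1800, 2200, 1900]
many_core = [1500, 1600, 1400, 1550]
print(unixbench_index(two_core) > unixbench_index(many_core))  # True
```

So a single headline number tells you very little about which machine wins for *your* workload.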
Hell, the names of the AWS instances in that list aren't even correct. What's a "high-cpu medium"? Judging from the stats page, they mean a "c1.medium", which is now two generations obsolete - you have to know about them and go out of your way to provision one. The one name they do list, "m3.medium", is incorrectly labelled a "high i/o" VM; AWS doesn't have a "high i/o" VM, and AWS itself doesn't consider the m3.medium to be network-, RAM-, or storage-optimised, so I'm not sure where that's coming from. And if you do need disk i/o with AWS, you can pay for provisioned IOPS (not very expensive), which needs to be accounted for in these comparisons. It's just getting my goat at the moment, because my comment was trying to argue against FUD, but that reference list can't even get well-known and advertised names correct.
AWS billing is complex, absolutely, but there is also a ton of flexibility, and it makes sense once you get past the learning curve. And micros do get throttled, but they also earn a certain number of "throttle credits" that help them survive bursts. And yes, I agree that you should choose the right tool for the right job - one HNer actually makes use of the huge amount of free bandwidth you get with the small DO servers to run a media streaming service (I forget the handle). But that still doesn't change the fact that AWS is no longer "OMG expensive!" compared to DO.
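The credit mechanism is essentially a token bucket: the instance earns credits at a steady baseline rate and spends them whenever the CPU runs hot, so short bursts are fine but sustained full-tilt load drains the bucket and you get throttled. A toy model (the class name, rates, and cap here are all mine for illustration, not AWS's actual figures):

```python
class BurstCredits:
    # Toy token-bucket model of burstable-instance CPU credits.
    def __init__(self, earn_per_min, cap):
        self.earn_per_min = earn_per_min  # baseline credits earned each minute
        self.cap = cap                    # maximum credits the bucket holds
        self.balance = cap                # start with a full bucket

    def tick(self, cpu_demand_credits):
        # Earn the baseline allowance, then spend what the workload asks for.
        self.balance = min(self.cap, self.balance + self.earn_per_min)
        spend = min(self.balance, cpu_demand_credits)
        self.balance -= spend
        # False means the instance was throttled this minute.
        return spend >= cpu_demand_credits

micro = BurstCredits(earn_per_min=0.1, cap=30)
# A short 20-minute burst at full demand is fine...
burst_ok = all(micro.tick(1.0) for _ in range(20))
# ...but another hour of sustained full load drains the bucket.
sustained = [micro.tick(1.0) for _ in range(60)]
print(burst_ok, sustained[-1])  # True False
```

The upshot: a micro is great for spiky workloads and a poor fit for anything CPU-bound around the clock, which is exactly the "right tool for the right job" point.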