Teaching consumers bad habits.
http://corporatesolutions.thomsonreuters.com/investor-relati...
Many more examples (including several large companies) can be found at:
https://www.google.com/search?q=inurl:http://phx.corporate-i...
Corporate-IR (or whoever runs that domain) meets the criteria for, and is authorized to handle, disclosure of information to investors.
Other companies use them too, for example NVIDIA: http://phx.corporate-ir.net/phoenix.zhtml?c=116466&p=irol-ir...
Also, very exciting that they're supporting GPU cloud rendering - that's going to be big for 3D.
My experience is that graphics card stats are a decidedly slippery fish as far as comparisons go.
However, a quick bit of Googling suggests that this is almost identical, at least on paper, to a GeForce GTX 770 or 680.
http://www.geforce.co.uk/whats-new/articles/introducing-the-...
Unfortunately, without knowing more details (clock speed, memory bandwidth) it's hard to say more.
Guess someone (possibly me) needs to benchmark 'em. :)
UPDATE - excellent info further down this thread: https://news.ycombinator.com/item?id=6678744
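If anyone wants to pin down those missing details (clock speed, memory bandwidth) themselves before benchmarking, here's a minimal sketch using the CUDA runtime API's device-properties query. It assumes the CUDA toolkit and NVIDIA driver are installed on the instance, and the bandwidth figure is a rough DDR estimate I'm computing myself, not an official spec:

    /* Minimal sketch: dump clock speed and memory details via the
       CUDA runtime API. Assumes the CUDA toolkit and NVIDIA driver
       are installed; build with e.g. `nvcc props.c -o props`. */
    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void) {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; i++) {
            struct cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, i);
            /* Rough bandwidth: memory clock (kHz -> Hz) * 2 (DDR)
               * bus width in bytes, reported in GB/s */
            double gbps = 2.0 * prop.memoryClockRate * 1000.0
                          * (prop.memoryBusWidth / 8.0) / 1e9;
            printf("%s: %d SMs, core %.0f MHz, mem bus %d-bit, ~%.0f GB/s\n",
                   prop.name, prop.multiProcessorCount,
                   prop.clockRate / 1000.0, prop.memoryBusWidth, gbps);
        }
        return 0;
    }

Running that on a g2.2xlarge next to a desktop GTX 680 would make the on-paper comparison concrete.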
And, presumably, cracking hashes!
The current and previous generation Intel CPUs (Haswell and Ivy Bridge, respectively) have on-die GPUs which support OpenCL: http://software.intel.com/en-us/articles/intel-sdk-for-openc...
AMD's APUs are quite cheap (~$100) CPU+GPU designs, similar to those in the upcoming PS4 and Xbox One (though the retail APUs are somewhat less powerful). They've been more or less designed specifically around the needs of a heterogeneous OpenCL application.
Finally, the last several generations of NVIDIA cards all support both CUDA and OpenCL, though the newer cards support additional features. You should be able to pick up a low-end, recent-generation NVIDIA GPU for roughly $100.
The new g2.2xlarge instances are $0.650/hour, and the existing cg1.4xlarge are $2.100/hour; so it may make sense to experiment on AWS a bit, then buy your own card for long-term use if you decide to spend more time doing GPU programming.
EDIT: Sorry for jumping on the Bitcoin hype train too soon! There are many other uses for cracking hashes.
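For anyone wondering whether a given CPU or GPU from the list above actually exposes OpenCL, a minimal sketch in C that just enumerates whatever the installed drivers report (assumes an OpenCL SDK and ICD loader are present; error checking omitted for brevity):

    /* Minimal sketch: list every OpenCL platform and device the ICD
       loader can see. Assumes an OpenCL SDK and vendor driver are
       installed; build with e.g. `gcc devices.c -lOpenCL`. */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void) {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;
        clGetPlatformIDs(8, platforms, &nplat);
        for (cl_uint p = 0; p < nplat; p++) {
            char pname[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                              sizeof(pname), pname, NULL);
            printf("Platform: %s\n", pname);

            cl_device_id devices[8];
            cl_uint ndev = 0;
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                           8, devices, &ndev);
            for (cl_uint d = 0; d < ndev; d++) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                sizeof(dname), dname, NULL);
                printf("  Device: %s\n", dname);
            }
        }
        return 0;
    }

If your Intel iGPU or AMD APU shows up in that list, you're ready to experiment before paying for any AWS hours.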
    22:42:58:WU02:FS00:0x15:GPU memtest failure
    22:42:58:WU02:FS00:0x15:
    22:42:58:WU02:FS00:0x15:Folding@home Core Shutdown: GPU_MEMTEST_ERROR
    22:42:58:WU02:FS00:0x15:Starting GUI Server
    22:42:59:WARNING:WU02:FS00:FahCore returned: GPU_MEMTEST_ERROR (124 = 0x7c)
That goes a bit against the trend in web development of moving much of the processing to the client side, so I wonder where this will go.
Really high-performance streaming of apps/games could reverse the trend of making everything browser-based in favor of streamed native apps.
For example, with a good Radeon HD 7970 you can get about 0.8 GH/s. Based on the rate of difficulty increase, the 7970 would mine about 0.02 BTC in all of November 2013, 0.01 BTC in December 2013, and < 0.01 BTC/month after that.
For various reasons, NVIDIA cards are slower at BTC mining than AMD cards. The fastest NVIDIA option, the Tesla S2070, can only hash about 0.750 GH/s.
Even 60 GH/s ASIC miners will be earning < 0.10 BTC per month by March 2014. In August 2013, a 60 GH/s miner would make ~0.8 BTC PER DAY. That's how quickly the difficulty is increasing.
At this point no GPU would make a decent Bitcoin miner, except as a hobby.
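If you want to reproduce the arithmetic above, here's a small sketch of the standard expected-reward formula, reward/day = hashrate * 86400 * block_reward / (difficulty * 2^32); the difficulty value is my assumption, just a rough late-2013 ballpark:

    /* Back-of-the-envelope sketch of expected BTC mined per day at a
       given hash rate. The difficulty is an assumed late-2013
       ballpark, not an exact figure. */
    #include <stdio.h>

    int main(void) {
        double hashrate     = 0.8e9;   /* 0.8 GH/s, e.g. a Radeon HD 7970 */
        double difficulty   = 500e6;   /* assumed network difficulty */
        double block_reward = 25.0;    /* BTC per block in 2013 */

        double btc_per_day = hashrate * 86400.0 * block_reward
                             / (difficulty * 4294967296.0 /* 2^32 */);
        printf("~%.4f BTC/day (~%.2f BTC/month)\n",
               btc_per_day, btc_per_day * 30.0);
        return 0;
    }

Bump the difficulty a few times in that sketch and you can watch the expected reward decay, which is exactly the point above.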