I think it's pretty clear this is a halo product, but I want to point out that having two non-crippled GPUs on a stick is an impressive technical achievement. Sure, they will be throttled when the heat constraints kick in, but I'm excited to see this sort of technology trickle down into the next "Asus 760 Mars"-style product.
For example, the Ford Shelby GT500 http://www.ford.com/cars/mustang/trim/shelbygt500/ is probably not even turning a profit for Ford, but when someone picks up an automotive magazine with an article about the car, it makes them more likely to buy the entry level Mustang at 40% of the cost, or any other Ford product.
http://en.wikipedia.org/wiki/Halo_effect#Halo_effect_and_bra...
The idea is that the customer thinks: "Look at this amazing product Nvidia is producing! They must have great engineers working on bleeding-edge technology!"
Even if they don't buy the $3000 GPU, they now associate Nvidia with high-quality GPUs, which may influence them to buy an Nvidia product in the future.
Because they made what I thought was a decent phone at the time, they ended up selling some of their other gear as a result.
http://www.theatlantic.com/technology/archive/2013/05/xbox-o...
Awesome.
Another fun one: at 8 teraflops, this card is roughly equivalent to the top supercomputer of 2000. So our desktops (in terms of raw computing capability) are only about 14 years behind supercomputers, and that gap is closing rapidly.
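A rough sanity check on that "14 years" figure, assuming top-supercomputer performance doubles about every 13 months (the historical TOP500 trend, an assumption here) and taking 2014's #1 system (Tianhe-2, ~33.86 petaflops) as the reference point:

```python
import math

# Rough "years behind" estimate for an 8 TFLOPS desktop card,
# assuming the top supercomputer doubles roughly every 13 months.
card_tflops = 8.0
top_2014_pflops = 33.86          # Tianhe-2, #1 on the TOP500 in 2014
doubling_months = 13             # assumed growth rate

ratio = top_2014_pflops * 1000 / card_tflops
years_behind = math.log2(ratio) * doubling_months / 12
print(f"{years_behind:.1f} years behind the current #1")
```

Depending on the assumed doubling rate this lands in the 12-14 year range, consistent with the eyeball estimate above.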
I haven't bought a new graphics card in 5 years because even next-gen games play well enough. But I can see VR changing that, with the need to render the same scene twice (once for each eye).
I wonder how much money you'd have to drop today to build a rig that can push 4K to each eye? There isn't a headset that can support that yet, but I'd imagine it's coming in the next 5-10 years.
Seems silly.
That said, the selling point of the Titan cards is that their GPUs don't have the same restrictions put on their general-purpose compute performance as the standard gamer cards. NVIDIA locks this performance on their GeForce cards in order to protect their lucrative GPGPU business, so this is really more of an entry-level card for scientific computing and other applications.
Cryptocurrency mining would be an obvious application, but a quirk of NVIDIA and AMD's differing architectures means that AMD cards are vastly more powerful at the specific functions needed to mine cryptocoins.
https://en.bitcoin.it/wiki/Mining_hardware_comparison
An ASIC won't play Skyrim @ 4K though... :)
Nvidia needs another card between the normal Titan and this one: a single-GPU card targeted at VR gaming that costs $1500 at most.
Intuitively, one GPU per eyeball sounds like a good fit.
(Although I am dreading having to type all those begins and ends.)
1. A single GTX Titan gets 300 MH/s [0].
2. There are two GTX Titan chips in this card, so let's say 600 MH/s.
3. A rack should fit around 60 of these. This gives us 36 GH/s (gigahashes per second).
4. According to this[1] calculator, you will get a whopping 0.00361597 BTC per day, which would be worth around $2.12. Tomorrow the expected payout drops to $2.09. Electricity should cost you at least an order of magnitude more.
[0] http://www.tomshardware.com/reviews/geforce-gtx-titan-perfor...
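The arithmetic in the list above can be sanity-checked with a quick sketch; the network hashrate used here (~36 PH/s) is an assumption backed out of the calculator's result, combined with the then-current 25 BTC block reward:

```python
# Back-of-the-envelope check of the rack's expected payout.
card_hashrate = 600e6          # H/s: two GTX Titan chips at ~300 MH/s each
cards_per_rack = 60
rack_hashrate = card_hashrate * cards_per_rack   # 36 GH/s

network_hashrate = 36e15       # H/s (assumed, ~2014-era figure)
blocks_per_day = 24 * 6        # one block every ~10 minutes
block_reward = 25              # BTC per block at the time

btc_per_day = rack_hashrate / network_hashrate * blocks_per_day * block_reward
print(f"{rack_hashrate / 1e9:.0f} GH/s -> {btc_per_day:.8f} BTC/day")
```

That comes out to ~0.0036 BTC/day for the whole rack, in line with the calculator's 0.00361597, i.e. the rack is hopeless against ASICs.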
In fact, many of the top supercomputers today use GPUs.
Also, I would assume the reason supercomputers aren't good for gaming is that they are parallelized in ways beyond the chips on the GPU (entire machines are networked together via various interconnects). The software for distributing the processing across several machines, or whatever aspect makes it "super", is probably what limits the ability of supercomputers to run video games :0
The downside, though, is that it's extremely resource intensive. Even 16x MSAA is faster than 2x supersampling. With 2x supersampling at 1080p you're rendering at 3840x2160 and then scaling down to 1080p, effectively the same as gaming at 4K.
I have two rigs, one with dual R9 270s and another with SLI'd GTX 760s. Each can run Dota 2 at 2560x1600 with 2x supersampling at around 40 and 30 fps respectively.
The image quality is beautiful, don't get me wrong, but even those cards in SLI aren't enough to push that many pixels.
With quad Titan Zs, you could probably do 4x supersampling, nearing the quality you'd get with Source Filmmaker, but in real time.
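For the curious, here's the pixel math behind that supersampling comparison (a minimal sketch; "2x" here means 2x per axis, as the 3840x2160 figure implies):

```python
# Pixel cost of supersampling: NxN SSAA renders the scene at N times
# the target resolution in each dimension, then downscales the result.
def ssaa_pixels(width, height, factor):
    return (width * factor) * (height * factor)

base = 1920 * 1080                        # 1080p target
print(ssaa_pixels(1920, 1080, 2))         # 2x SSAA: 3840x2160, same as 4K
print(ssaa_pixels(1920, 1080, 2) / base)  # 4x the shading work
```

So 2x supersampling quadruples the pixels shaded, and 4x supersampling would be 16x the work of native 1080p, which is why it takes multi-GPU setups to sustain it.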
However, there is an extension of "multi monitor" that's probably going to become a big thing: VR gaming. VR gaming demands both more pixels and a consistent, high framerate, and could percolate more quickly down from "most hardcore of the hardcore" to "many people who enjoy gaming own one" than 4K displays or huge monitor stacks have.
This particular GPU is a multi-GPU single-card solution which is probably worse for VR, because alternate-frame rendering adds latency and can result in uneven frame timings especially when two sequential frames have dramatically different complexity.
Because I haven't found a single game that doesn't work or that has SLI-specific bugs. CrossFire, however....
5K? "All the way to 11," or did somebody just not proofread the copy?