> Reason being that a calculation error that results in a dead pixel is fine when you are playing a video game. It'll be there for 1/60s and then be gone, never seen again. The same error in a Disney film or a 3D scene rendered for a poster is another matter, because those frames need to be pixel-perfect, so workstation-class cards have a much lower threshold for error, and cost accordingly.
I've heard this a lot, and I don't doubt that it's correct, but could you explain why? Like, do consumer-class GPUs have less accurate floating point, or do their embedded algos contain hacks that produce less accurate results faster?
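To make the second possibility concrete (as an illustration of what a speed-for-accuracy hack can look like, not necessarily what the parent comment meant): CUDA exposes fast intrinsics such as `__fdividef`, which skip full IEEE-754 rounding in exchange for throughput, and nvcc's `--use_fast_math` flag substitutes them for the standard operations across a whole compilation unit. A minimal sketch, assuming a CUDA-capable machine; the kernel name `compare_div` and the test values are made up for the example:

```cuda
// Compares IEEE-compliant single-precision division against CUDA's fast
// intrinsic. For most inputs the two agree to the last bit; the intrinsic's
// documented behavior is up to ~2 ulp of error and incorrect results for
// very large divisors (|y| > 2^126), which is the price paid for speed.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void compare_div(float a, float b, float *out) {
    out[0] = a / b;              // correctly rounded IEEE-754 division
    out[1] = __fdividef(a, b);   // fast approximate division intrinsic
}

int main() {
    float h_out[2] = {0.0f, 0.0f};
    float *d_out = nullptr;
    cudaMalloc((void **)&d_out, sizeof(h_out));

    compare_div<<<1, 1>>>(1.0f, 3.0f, d_out);
    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);

    printf("ieee: %.9g  fast: %.9g\n", h_out[0], h_out[1]);
    cudaFree(d_out);
    return 0;
}
```

Note that this particular trade-off is opt-in at compile time (per source, via the flag or the intrinsic), so it illustrates the *kind* of shortcut the question is asking about rather than something a consumer card silently does in hardware.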