I guess it really depends on what the objective is. I'm not speaking speculatively: concretely, it seems like a reasonable way to achieve the few images they show in the press release. They show two relatively flatly rendered lots-of-tubes images. I know SSAO isn't the same, and I get that there's overdraw, but a lot of the detail in their particular objective is there. In one shot they show a lot of emissive tubes with depth of field, which is harder to achieve. I suppose if they're happy, they're happy.
> interactive performance for all datasets on a regular Intel Xeon processor, which can render images at 20-25 frames per second (FPS)
There's a big difference between interactive performance and a production-quality render. Something tells me it's not producing 25 noise-free frames per second. There just isn't enough information here to judge.
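To put rough numbers on that gap, here's a quick back-of-envelope. Every figure in it (resolution, samples per pixel for interactive vs. final frames) is my own assumption for illustration; none of them come from the press release.

```python
# Back-of-envelope only: none of these numbers are from the press release.
width, height = 1920, 1080   # assume a 1080p viewport
fps = 25                     # the claimed frame rate
spp_interactive = 1          # interactive previews often run ~1 sample per pixel

samples_per_second = width * height * fps * spp_interactive
print(f"~{samples_per_second / 1e6:.0f} M samples/s")  # ~52 M samples/s

# A converged, production-quality path-traced frame often needs hundreds of
# samples per pixel; assume 512 spp as a ballpark.
spp_final = 512
seconds_per_frame = (width * height * spp_final) / samples_per_second
print(f"~{seconds_per_frame:.0f} s per noise-free frame")  # ~20 s per frame
```

So at the same throughput, one "final" frame is on the order of tens of seconds, not 1/25 of a second. That's the difference between "interactive" and "production."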
> and a dual Xeon can match a single top-of-the-line GPU
At what, like 3x-5x the price? At how many watts? And at what I.T. complexity? A GTX 1080, which outperforms a Titan X, is really a phenomenally good deal; especially considering I can drop it into an existing workstation with all of my existing software installed, and especially considering I can rent GPU time on Amazon by the hour.
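On the price and watts question, a rough comparison using 2016-era list prices and TDPs as I remember them. Treat every figure as an assumption, not a quote; I've picked the top-bin Xeon since the comparison was against a "top-of-the-line" GPU.

```python
# Rough 2016 list prices and TDPs; I believe these are approximately right,
# but treat them as assumptions, not quotes from any vendor.
gpu  = {"name": "GTX 1080",        "price_usd": 599,  "tdp_w": 180}
xeon = {"name": "Xeon E5-2699 v4", "price_usd": 4115, "tdp_w": 145}  # top-bin part

dual_xeon_price = 2 * xeon["price_usd"]   # $8230
dual_xeon_power = 2 * xeon["tdp_w"]       # 290 W

print(f"price ratio: {dual_xeon_price / gpu['price_usd']:.1f}x")  # 13.7x
print(f"power ratio: {dual_xeon_power / gpu['tdp_w']:.1f}x")      # 1.6x
```

Even swapping in a cheaper mid-bin Xeon, the dual-socket box lands well above the GPU on purchase price, and that's before counting the server it has to live in.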
I guess what I'm reacting to is how forced an example it seems.