It works best on Chrome, Firefox and Safari:
https://plus.google.com/u/0/+BenHouston3D/posts/DYq2RKJENC5
Here is another example:
https://twitter.com/exocortexcom/status/443538733661704192
I can only see raytracing becoming more popular.
The big win is around indirect lighting, but the development of that in standard rasterisers in the last decade has exceeded even my wildly optimistic expectations.
New options = good, but there's no such thing as a graphics silver bullet.
If raytraced scenes were hard to control, why would every single animated movie be raytraced?
Raytracing really is the be all and end all of graphics. The more rays you can render, the more realistic your scene will be, up to 100% realism.
Of course, we can get pretty far with the hacks as well, and it's hard to say whether hardware raytracing can come close to the quality that hardware shader hacks can achieve on traditional GPUs in real time.
I'm not entirely sure that's actually true.
In this paper (http://graphics.pixar.com/library/RayTracingCars/paper.pdf) Pixar talk about how the first movie they ever tested ray tracing on was Cars. Before that they were using scanline rendering.
And don't forget non-realistic graphics.
The problems of rendering are unlikely to become inherently easier; I don't think raytracing will ever be the "end all of graphics". And while "every single animated movie" might be raytraced to some extent, it's highly unlikely any of them use pure raytracing to achieve the effect.
That's a bit of a stretch. Pixar didn't use ray-tracing until Cars [2006]. Even then, it was only applied for secondary effects.
It wasn't until Monsters University [2013] that they produced the majority of a movie using ray-tracing [1]. That's a recent development, IMHO.
[1] http://thisanimatedlife.blogspot.com/2013/05/pixars-chris-ho...
Likely you're both right: while it's harder to learn how to tweak scenes and assets with ad-hoc rendering techniques than to just use raytracing, an artist who has already mastered the black arts of tweaking everything just so is going to resent losing that control and having to let the physics take over.
I don't see why it's harder for artists to control the results... can you elaborate? It seems like it's going to be 90% the same. You have a point on a surface, you have a camera vector and light vectors, you supply a small piece of code and out pops a color. The major differences in the pipeline are in the scaffolding: depth tests, reflections, and shadows will be done in different ways. The differences for artists are that they'll have to learn a system with different limitations.
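To make the "supply a small piece of code and out pops a color" point concrete, here's a minimal sketch of that kind of per-point shading function (a hypothetical Lambert + Blinn-Phong example in Python, not any engine's actual API). The contract is the same whether the scaffolding around it is a rasterizer or a ray tracer:

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, light_dir, view_dir, albedo=(0.8, 0.2, 0.2),
          light_color=(1.0, 1.0, 1.0), shininess=32.0):
    """Point on a surface + light vector + camera vector in, color out.

    Simple Lambert diffuse plus Blinn-Phong specular; clamped to [0, 1].
    (Assumes light_dir and view_dir are not exactly opposite, so the
    half vector is well defined.)
    """
    n = normalize(normal)
    l = normalize(light_dir)
    v = normalize(view_dir)
    diffuse = max(dot(n, l), 0.0)
    h = normalize(tuple(a + b for a, b in zip(l, v)))  # half vector
    specular = max(dot(n, h), 0.0) ** shininess
    return tuple(min(1.0, alb * lc * diffuse + lc * specular)
                 for alb, lc in zip(albedo, light_color))
```

Nothing in that function knows or cares how the hit point was found; only the scaffolding (visibility, shadows, reflections) changes between the two pipelines.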
My guess is that game graphics are going to follow the developments in movie CGI. Pixar switched to ray tracing in 2013, and games will switch when hardware powers up and expertise filters down.
If current-generation AAA games are any indication, either very few artists ever learned the raster workarounds or few artists ever had time to implement the workarounds.
I think ray-tracing will be a game changer. Raster shadows are easy to get subtly wrong and very difficult to get right. Cube-maps are easy to get subtly wrong and very difficult to get right. Transparency is easy to get subtly wrong and very difficult to get right. The list goes on.
> Even in AAA games, there are tons of artifacts in the shadows. Same goes for reflections.
Exactly! I can't count the number of times I've seen shadows pointing in the wrong direction, having the wrong color, having the wrong penumbra/antumbra, casting through solid objects, etc. Cube-map reflections are even worse (yay for faucet handles reflecting forest scenes), especially when they're moving. Expect to see a reflection slide up the body of a car as it comes to a stop? If you're not in a car-racing game, forget about it.
All of those problems can be overcome with artist sweat and tears. The code has already been written and is in the big engines, but the effects still regularly fail to happen in AAA titles.
Ray-tracing makes it easy to do things right. None of the raster techniques have reached that milestone. This WILL be a game changer.
This demo was pretty impressive. I think they said they used 4 Titans at 720p.
In contrast to that technique you have "radiosity", where the illumination at each point on a surface is calculated based on its environment. Radiosity is better at rendering diffuse light, ray tracing is better at rendering reflected light.
The underlying difficulty is that illumination is a maddeningly combinatorial problem. In principle every part of every object in a scene contributes illumination to every part of every other object. The light falling on a desk lamp from the LEDs of a clock also illuminates the desk itself, and so on. Radiosity and ray tracing attempt to solve those problems by making simplifying assumptions and performing a subset of the calculations necessary to illuminate and render a scene completely faithfully. In principle a more proper ray tracing algorithm would send out a huge number of rays in every direction for every image-field ray, then follow each of those through its evolution, sending out yet more rays for each surface they fall on, and so on. But that's far too computationally intensive (it's an O(n!)-level problem).
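A toy illustration of that blow-up (with made-up numbers; the real growth depends on scene and materials): if every hit spawns a fixed number of secondary rays, the ray count grows geometrically with bounce depth:

```python
def rays_cast(branching, depth):
    """Total rays traced for one primary ray, if every hit spawns
    `branching` secondary rays and we recurse `depth` bounces deep."""
    if depth == 0:
        return 1
    return 1 + branching * rays_cast(branching, depth - 1)

# 100 secondary rays per hit, 4 bounces: over a hundred million rays
# for a single pixel sample.
print(rays_cast(100, 4))
```

That is why every practical algorithm either prunes this tree aggressively or samples it statistically.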
Path tracing is similar to these techniques but makes different simplifying assumptions. The most important aspect is that instead of deterministically calculating everything in a specific way, it uses a Monte Carlo method: statistically sampling different "paths", then using the data to estimate the resulting illumination/image. Path tracing strikes a compromise between ray tracing and radiosity, rendering both reflective and diffuse light well, though it has its own shortcomings.
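A minimal sketch of the Monte Carlo idea (uniform hemisphere sampling in Python; real path tracers use much smarter sampling distributions): estimate an illumination integral by averaging randomly sampled directions, each weighted by the inverse of its sampling probability:

```python
import random
import math

def sample_hemisphere():
    """Uniform direction on the z-up hemisphere, plus its pdf 1/(2*pi)."""
    u1, u2 = random.random(), random.random()
    z = u1
    r = math.sqrt(max(0.0, 1.0 - z * z))
    phi = 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), z), 1.0 / (2.0 * math.pi)

def estimate_irradiance(incoming_radiance, n_samples=20000):
    """Monte Carlo estimate of E = integral of L(w) * cos(theta) dw
    over the hemisphere: average L * cos / pdf over random directions."""
    total = 0.0
    for _ in range(n_samples):
        w, pdf = sample_hemisphere()
        total += incoming_radiance(w) * w[2] / pdf  # w[2] = cos(theta)
    return total / n_samples
```

With constant incoming radiance of 1, the estimate converges to pi, the analytic answer; the noise you see in path-traced images is exactly this estimator's variance at low sample counts.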
Much of the progress in photorealistic rendering has come from improving the techniques used to sample paths. Bi-directional path tracing samples paths by tracing from both the light and the eye, then performing a technique called multiple importance sampling to weight them appropriately. This allows the algorithm to pick up light paths that would otherwise be hard to reach, such as those which form caustic reflections.
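The weighting step can be sketched as follows (this is the standard balance heuristic; the function names are my own, not from any particular renderer):

```python
def mis_weight(pdf_chosen, pdf_other):
    """Balance heuristic: down-weight a sample in proportion to how
    likely the *other* strategy was to have produced the same path."""
    return pdf_chosen / (pdf_chosen + pdf_other)

def combine_direct_light(f_light, pdf_light_l, pdf_bsdf_l,
                         f_bsdf, pdf_bsdf_b, pdf_light_b):
    """One sample from each strategy (light sampling and BSDF
    sampling), combined into a single unbiased estimate."""
    est = 0.0
    if pdf_light_l > 0.0:
        est += mis_weight(pdf_light_l, pdf_bsdf_l) * f_light / pdf_light_l
    if pdf_bsdf_b > 0.0:
        est += mis_weight(pdf_bsdf_b, pdf_light_b) * f_bsdf / pdf_bsdf_b
    return est
```

The point is that neither strategy alone handles all paths well (light sampling misses sharp glossy reflections, BSDF sampling misses small bright lights), but the weighted combination keeps the estimate unbiased while suppressing the variance of whichever strategy is worse for a given path.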
One of the more recent developments on this front is the use of the "specular manifold", which lets a path sampler "walk" through path-space in a way that captures even more partially-specular paths. (See Wenzel 2013.) This technique allows efficient sampling of the main light transport paths in scenes where a specular transparent surface surrounds a light source, e.g. a light bulb casing.
Edit: for this specific hardware, it sounds like they are using a hybrid approach so may very well be doing a basic eye-to-light ray tracing algorithm and then using raster for diffuse surface shading.
While it is O(n!) with respect to reflections, a real physical rendering is a 'mere' O(n) with respect to volume (or O(n^3) in the side length of the cubic volume we want to simulate)! Why? We simply create a grid of roughly 200 nm cells and run a wave simulation for the light. Or O(n log n) if we want to use an FFT-based simulation.
Naturally this consumes horrendous amounts of memory and computational power, simply because the size of the spaces humans are interested in is so massive compared to the wavelength of light.
But eventually we might move in this direction; maybe we'll even live to see it.
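The memory appetite is easy to sanity-check with a back-of-envelope calculation (assuming just one 8-byte field value per 200 nm cell, which is surely an underestimate):

```python
def wave_sim_memory_bytes(side_m, cell_m=200e-9, bytes_per_cell=8):
    """Rough memory for a full-wave light simulation on a uniform 3D
    grid: one field value per cubic cell of side `cell_m`."""
    cells_per_side = side_m / cell_m
    return bytes_per_cell * cells_per_side ** 3

# A modest 4 m room: 4 m / 200 nm = 2e7 cells per side,
# so 8e21 cells and on the order of 6.4e22 bytes (tens of zettabytes).
print(wave_sim_memory_bytes(4.0))
```

So "horrendous" is, if anything, an understatement: a single room at optical resolution needs more memory than exists on Earth today.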
It takes A LOT of work and planning to get light and shadow correct in a raster setting. Most AAA games don't bother. Raytracing makes it easy to get them right, which will make all the difference in the world.
I had a similar experience working a bit with Radiance years ago. The output looked mediocre at best, but it was reporting a real view of the scene; its claim to fame is that it outputs real energy values in physical units (like energy per unit solid angle, or luminance, or such). But despite the cartoony models I provided, it did demonstrate subtleties that weren't hacks (needed for this application), things that you wouldn't think to, or bother, implementing directly.
This? This captures a lot of the needed visual effects faster and more correctly without crazy hacks (or so it seems - it does say hybrid rendering).
There's a reason Nvidia employs top CG talent for their hardware demos. A pretty demo sells more than an ugly one, even if they essentially pull off the same effect.
I've been interested in ray-tracing since the early '90s, and I'm glad it's finally coming to real-time, but this isn't going to "revolutionize" shit. It's going to make 3D games and VR slightly prettier than they were. It's not going to enable new styles of gameplay or new modes of interaction. We will never again see anything like the enormous forward leaps in realtime graphics that happened during the '90s.
I'm also a bit put off by their comparison showing that PowerVR has better reflections, shadows, and transparency than a raster engine with reflections and some shadows turned off and a very poor choice of glass-transparency filter.
On another look, I don't even know what they're going for with the shadows. The rasterized image has "NO SHADOWS" printed right between the shadows of a building and a telephone wire, and their hybrid render has the light from the diner windows casting shadows across outside pavement in broad daylight. Bwuh?
If true, color me impressed!
I remember how slow the process was: it could take several hours or even days to generate an image full of reflections, but in the end the results were usually stunning...
Link: http://www.povray.org/
Before it was renamed "povray", it was called "dkbtrace". I printed the entire dkbtrace source and studied it to see how a real ray tracer worked; and through it, I learned how efficient vtbls work (it was 1990 C, and though C++ was already starting to become visible, it was still "that new language that may or may not become popular"; dkbtrace implemented all the OO internally).
Thanks, D.K.B.
A nice blog about real time path tracing is: http://raytracey.blogspot.nl/
The comparison examples given in the article were slightly ridiculous. How does a non-reflective car represent 'traditional' rendering? Look at any great AAA game and you'll see reflections, refractions, radiosity, etc., that are all pretty amazing. I don't think the general demand will be there for alternative rendering hardware for quite a while.
Nope. I'd say the "raytracing pipeline" really does replace only the rasterization step, and once the triangle to draw is "found", you do whatever shading/lighting/fx/fragshader you want to draw it with. Wouldn't this be the most sensible approach?
Rasterization in layman's terms really is just "figure out which triangle, if any, is 'hit' at this pixel", making it a historically much faster and very neat hack to avoid tracing a ray.
But you don't get cheap soft shadows / ambient occlusion / reflection-refraction, and you'd need to do occlusion culling separately for current-gen "complex" scenes to avoid a draw call for all kinds of hidden objects. That's where, as it becomes more feasible, raytracing also becomes much more attractive. Potentially also reducing geometry-LODing headaches, etc.
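For contrast, here's what the ray tracer's version of that "which triangle is hit" question looks like: a standard Möller-Trumbore intersection test, sketched in Python:

```python
EPS = 1e-9

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_hits_triangle(origin, direction, v0, v1, v2):
    """Moller-Trumbore: distance along the ray to the triangle, or
    None on a miss. Rasterization answers the same hit question for
    the camera rays without ever constructing a ray explicitly."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < EPS:
        return None                  # ray parallel to triangle plane
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv              # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv      # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv             # distance along the ray
    return t if t > EPS else None
```

The appeal of the ray formulation is exactly what the comment above says: the same primitive answers shadow, reflection, and occlusion queries, whereas a rasterizer needs a separate hack for each.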
I can't imagine these ever being cheaper than ASICs.
Ugh. Never. Computing a hash won't be any faster on these GPUs.
Improvements in tooling and reuse are the only way we can actually make proper use of better rendering.
Also, if they put FP64 support in their gaming cards they wouldn't be able to charge scientists and engineers extra.
Bingo. Not many entertainment problems demand such precision. Why not charge more for it?
Allow me to disagree: the reflection is too green. Probably not the hardware's fault, or is it?
[1] http://blog.imgtec.com/wp-content/uploads/2014/03/5_-PowerVR...