Mitsuba is an open source research renderer with lots of cool features like differentiable rendering. https://www.mitsuba-renderer.org/
Maxwell has two spectral modes of varying accuracy. The more complex method is often used for optics. https://maxwellrender.com/
Manuka by Wētā FX is spectral and has been used in several feature films https://dl.acm.org/doi/10.1145/3182161 and https://www.wetafx.co.nz/research-and-tech/technology/manuka
Afaik most spectral rendering systems do not do (thin-film) interference or other wave-based effects, so that is another frontier. Reality has a surprising amount of detail.
Even photorealism is a shifting target, as it turns out that photography itself diverges from reality; there is a trend of making games and movies look "cinematic" in a way that is not exactly realistic, or at least not how things appear to the human eye. But how scenes appear to human eyes is also a tricky question, as humans are not just simple mechanical cameras.
Another one that few implement, and which can have a quite noticeable effect in certain scenes, is polarization of light[1].
[1]: https://www.giangrandi.ch/optics/polarizer/polarizer.shtml
Beyond the effects shown here, there are other benefits to spectral rendering - if done using light tracing, it allows you to change color, spectrum and intensity of light sources after the fact. It also makes indirect lighting much more accurate in many scenes.
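To make the "change lights after the fact" idea concrete, here is a minimal sketch (the array layout, names, and `einsum` arrangement are my own, not from any particular renderer): a light tracer can cache, per light source, how much of each light's emission reaches each pixel in each wavelength bin, and re-lighting is then just a weighted sum, with no re-tracing.

```python
import numpy as np

H, W, BINS, LIGHTS = 4, 4, 8, 2

# transport[l, y, x, b]: throughput from light l to pixel (y, x) in
# wavelength bin b.  A real light tracer would fill this by tracing
# paths from the lights; random data stands in for it here.
rng = np.random.default_rng(0)
transport = rng.random((LIGHTS, H, W, BINS))

def render(light_spectra):
    """Combine the cached transport with (possibly edited) light spectra.

    light_spectra[l, b] is the emitted power of light l in bin b.
    """
    return np.einsum('lyxb,lb->yxb', transport, light_spectra)

warm = np.linspace(0.2, 1.0, BINS)   # more power at long wavelengths
cool = np.linspace(1.0, 0.2, BINS)

img1 = render(np.stack([warm, cool]))
img2 = render(np.stack([cool * 2.0, warm]))  # recolor + brighten, no re-trace
```

Because transport is linear in the light emission, any spectrum or intensity edit is exact, not an approximation.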
I spent some great time playing with the base implementation: making the rays act as particles* that bend their paths toward/away from objects, making them "remember" the last bounce angle and use it at the next material hit, etc. Most of the results looked bad, but I still got some intuition about what I was looking at. Moving the camera by a notch was also very helpful.
A lot of fun, great for a small recreational programming project.
* Unless there's an intersection with an object, cap the ray length at some small amount, then shoot many rays from that point in all directions and, for each hit, apply something similar to the gravity equation. Of course this is slow and just an approximation, but it's easy, and you can implement a "black hole" type of object that bends light in the scene.
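A minimal sketch of that trick (all names and constants here are made up for illustration): advance each ray in short segments and pull its direction toward a "black hole" point with an inverse-square term, renormalizing so the ray keeps unit speed.

```python
import math

def march(origin, direction, hole, strength=0.05, step=0.05, n_steps=400):
    """Step a ray forward, bending it toward an attractor point."""
    px, py, pz = origin
    dx, dy, dz = direction
    for _ in range(n_steps):
        # vector from the ray position to the attractor
        ax, ay, az = hole[0] - px, hole[1] - py, hole[2] - pz
        r2 = ax * ax + ay * ay + az * az
        if r2 < 1e-4:                    # fell into the "black hole"
            return None
        g = strength * step / (r2 * math.sqrt(r2))   # ~ 1/r^2 pull
        dx, dy, dz = dx + ax * g, dy + ay * g, dz + az * g
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        dx, dy, dz = dx / norm, dy / norm, dz / norm  # keep unit speed
        px, py, pz = px + dx * step, py + dy * step, pz + dz * step
    return (px, py, pz)

# a ray aimed straight past the attractor gets deflected toward it
end = march((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), hole=(2.0, 0.5, 0.0))
```

In a full renderer you would also test for object intersections along each short segment, as described above; this sketch only shows the bending.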
http://canonical.org/~kragen/sw/aspmisc/my-very-first-raytra...
since then i've written raytracers in clojure and lua and a raymarcher in js; they can be very small and simple
last night i was looking at Spongy by mentor/TBC https://www.pouet.net/prod.php?which=53871 which is a fractal animation raytracer with fog in 65 machine instructions. the ms-dos executable is 128 bytes
i think it's easy to get overwhelmed by how stunning raytraced images look and decide that the algorithms and data structures to generate them must be very difficult, but actually they're very simple, at least if you already know about three-dimensional vectors. i feel like sdf raymarching is even simpler than the traditional whitted-style raytracer, because it replaces most of the hairy math needed to solve for precise intersections with scene geometry with very simple successive approximation algorithms
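a minimal sphere tracer along those lines, as a sketch (the names are mine, and the scene is just one sphere): the whole intersection "solver" is repeatedly stepping forward by whatever distance the sdf reports, which is safe because the sdf guarantees no surface is closer than that

```python
import math

def scene_sdf(p):
    # signed distance to a unit sphere at the origin
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2) - 1.0

def raymarch(origin, direction, max_steps=128, eps=1e-4, max_dist=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = (origin[0] + direction[0] * t,
             origin[1] + direction[1] * t,
             origin[2] + direction[2] * t)
        d = scene_sdf(p)
        if d < eps:
            return t          # hit: distance along the ray
        t += d                # safe step: nothing is closer than d
        if t > max_dist:
            break
    return None               # miss

hit = raymarch((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))
```

no quadratic formula, no per-shape intersection code; adding shapes is just min() of their sdfs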
the very smallest raytracers like spongy and Oscar Toledo G.'s bootsector raytracer https://github.com/nanochess/RayTracer are often a bit harder to understand than slightly bigger ones, because you have to use a lot of tricks to get that small, and the tricks are harder to understand than a dumber piece of code would be
It’s just a catchy title. You can implement the book in an hour or two, if you’re uncurious, or a month if you like reading the research first. Also maybe there are meaningful differences in the feature set such that it’s better not to try to compare the time taken? The Ray Tracing in One Weekend book does start the reader off with a pretty strong footing in physically based rendering, and includes global illumination, dielectric materials, and depth of field. It also spends a lot of time building an extensible and robust foundation that can scale to a more serious renderer.
It also reminds me of a time that I was copying code from a book to make polyphonic music on an Apple II. I got something wrong for sure when I ran it, but instead of harsh noise, I ended up with an eerily beautiful pattern of tones. Whatever happy accident I made fascinated me.
Perhaps a very-low-res in-browser renderer might be fast enough for interactively playing with lighting and materials? And perhaps do POV for anomalous color vision, "cataract lens removed - can see UV" humans, dichromat non-primate mammals (mice/dogs), and perhaps tetrachromat zebra fish.
[1] http://www.ok.sc.e.titech.ac.jp/res/MSI/MSIdata31.html [2] an inexpensive multispectral camera using time-multiplexed narrow-band illumination: https://ubicomplab.cs.washington.edu/publications/hypercam/
Until you encounter significant dispersion or thin-film effects, that is; then you need to sample wavelengths for each path, so it becomes (even more of) an approximation.
I would randomly sample a frequency, compute its color, and use it to modulate the ray color. I would then have to scale the result by 3 to account for the pure refracted color carrying only 1/3 of the brightness.
Yes, when combining spectral rendering with refraction you'll need to pick a frequency by sampling the distribution. This can get tricky in general, so it's good to build it up in incremental steps. The same is true of reflections, but it's up to you whether you want frequency-dependent materials in both cases. There are still reasons to use spectral rendering even if you choose simplified materials.
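As a concrete sketch of the sampling step (uniform sampling over the visible range is the simplest unbiased choice; everything here, including the names, is illustrative): pick one wavelength per path, then divide its contribution by the sample's pdf so the estimate stays unbiased. The 3x scaling mentioned above is a special case of this pdf compensation.

```python
import random

LAMBDA_MIN, LAMBDA_MAX = 380.0, 730.0   # visible range in nm

def sample_wavelength(rng):
    lam = rng.uniform(LAMBDA_MIN, LAMBDA_MAX)
    pdf = 1.0 / (LAMBDA_MAX - LAMBDA_MIN)   # uniform pdf
    return lam, pdf

def estimate(spd, rng, n=10000):
    """Monte Carlo estimate of the integral of a spectral power
    distribution over the visible range."""
    total = 0.0
    for _ in range(n):
        lam, pdf = sample_wavelength(rng)
        total += spd(lam) / pdf   # divide by pdf to stay unbiased
    return total / n

rng = random.Random(1)
flat = lambda lam: 1.0          # constant spectrum
approx = estimate(flat, rng)    # integral of 1 over [380, 730] is 350
```

A smarter pdf (e.g. proportional to the luminous efficiency curve) reduces variance without changing the expected value.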
https://geon.github.io/programming/2013/09/01/restructured-c...
If you mean raster rendering pipelines, then I don't believe it's possible, because the nature of GPU pipelines precludes it. You'd likely need to use compute shaders to build it, at which point you've just written a pathtracer anyway.
If you mean a pathtracer, then real-time becomes wholly dependent on what your parameters are. At a small enough resolution, Mitsuba with Dr.Jit could theoretically start rendering frames after the first one quickly enough to be considered real-time.
However, the reality is that even in film, with offline rendering, very few studios find the gains of spectral rendering to be worth the effort. Outside of Wētā with Manuka, nobody else really uses spectral rendering. Animal Logic did for The LEGO Movie, but solely for the lens flares.
The workflow changes needed to make things work with a spectral renderer, and the very subtle differences in output, are just not worth the large increase in render time.
One thing you'll run into is that there isn't a clear frequency-to-color response curve for non-visible wavelengths, so you need to invent your own frequency-to-RGB function (false color).
Another thing is that radio waves have much longer wavelengths than visible, so diffractive effects tend to be a lot more important, and ray tracing (spectral or otherwise) doesn't do this well. Modeling diffraction is typically done using something like FDTD.
https://en.wikipedia.org/wiki/Finite-difference_time-domain_...
I'm no RF guy, but I imagine you will quickly have to care about regimes where the wavelike properties of EM radiation dominate, in which case ray tracing is not the right tool for the job.
This reminds me of diagnosing bugs while writing my own raytracer, and attempting to map the buggy output to weird/contrived/silly alternative physics
All else being equal, if you carry a fixed-size power spectrum of n > 3 elements along with each ray instead of an RGB triple, then you really might see up to an n/3 performance penalty. For example, using 6 wavelength channels can be up to twice as slow as an RGB renderer. Whether you actually experience the full n/3 slowdown depends on how much time you spend shading versus tracing rays, i.e., traversing the BVH. Shading is slowed down by spectral, but scene traversal isn't, so check Amdahl's law.
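The Amdahl's-law arithmetic, worked out (the 40% shading fraction below is illustrative, not a measurement):

```python
def spectral_slowdown(shade_frac, n_channels):
    """Overall slowdown vs. an RGB renderer, assuming only the shading
    fraction of frame time scales with channel count and traversal
    cost is unchanged."""
    per_channel_cost = n_channels / 3.0           # relative to RGB
    return (1.0 - shade_frac) + shade_frac * per_channel_cost

# if shading is 40% of frame time, 6 channels costs ~1.4x, not 2x
slow = spectral_slowdown(0.4, 6)
```

Only in the limit where shading dominates the frame time does the slowdown approach the full n/3.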
Problem is, all else is never equal. Spectral math comes with spectral materials that are more compute intensive, and fancier color sampling and integration utilities. Plus often additional color conversions for input and output.
Another way to implement spectral rendering is to carry only a single wavelength per ray path, like a photon does, and ensure the lights' wavelengths are sampled adequately. This makes a single ray faster than an RGB ray, but it adds a new dimension to your integral, which means extra noise, so it takes longer to converge, probably more than 3x longer.