If you mean everything above the graphics API, e.g. modern lighting and shading techniques, there is a wide range of choices. PBR-inspired material response models are now dominant, and I recommend Naty Hoffman's slide deck as an excellent overview of the subject [0]. I can go into more detail here if wanted. Real-Time Rendering (Akenine-Möller, Haines, and Hoffman) is a pretty good reference. Get familiar with linear algebra, geometry, and trigonometry. I recommend "3D Math Primer for Graphics and Game Development" by Dunn and Parberry [1] if you need a refresher.
[0] https://blog.selfshadow.com/publications/s2015-shading-cours... [1] https://www.amazon.com/Math-Primer-Graphics-Game-Development...
For offline rendering, where frame times are measured in seconds rather than milliseconds, consider "Physically Based Rendering: From Theory to Implementation", currently in its 3rd edition. https://www.pbrt.org/
A much shorter book with breadth is "Foundations of 3D Computer Graphics" by Steven J. Gortler.
For advanced techniques:
- GPU Pro and GPU Zen series (ed. Engel)
- Ray Tracing Gems (Haines et al.)
- Unreal Engine tutorials and source code for reference implementations
I want to build (or use) a custom 3D rendering engine that:
+ Renders from center of screen, in clockwise spiral outward, if any pixels remain after 15ms I want to drop them (should only be peripheral, I'm OK with that in exchange for constant performance time).
+ Can add secondary orthogonal clipping triangles to each triangle, such that the clipping edge could be defined by a curve. Imagine a hexagon (made of 6 triangles), we could easily extend a low poly hexagon into a high resolution circle just by interpolating a curve here.
+ My goal after that is to then define an animation pipeline, where it will procedurally generate thousands of fluid dynamic simulations that it then feeds back into itself as a machine learning training set. Once it has associated these slow compute intensive simulations with the model, ideally we can swap in high-resolution smoke/fluid graphics in-game by only using a few low resolution particle "marker" calculations that then trigger substitutions for the model.
And much more.
I'm OK if this takes me 9 years to make. My target is Rust+WebGL, inspired by the really impressive work of MakePad.
I'm also willing to spend money on people who can teach me this, so please contact me if you're an expert on shaders with low-level WebGL experience.
> Renders from center of screen, in clockwise spiral outward, if any pixels remain after 15ms I want to drop them (should only be peripheral, I'm OK with that in exchange for constant performance time).
I suppose you could split up your CPU-side render calls into segments and drop the ones that occur beyond what you measure as 15ms.
However, this means that you need to wait for the GPU to finish work after each segment, which has some overhead. It's probably better to figure out how much you can fit into 15ms ahead of time and then submit all that in one go.
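To make the "figure out how much fits ahead of time" idea concrete, here is a minimal CPU-side sketch in Rust. It assumes you have per-segment cost estimates (e.g. from timing previous frames); the function name and the idea of pre-sorted center-out segments are my own illustration, not an established API.

```rust
/// Given per-segment GPU cost estimates (in milliseconds) and a frame budget,
/// return how many leading segments fit within the budget. Segments are
/// assumed to be pre-sorted center-out, so everything dropped is peripheral.
pub fn segments_within_budget(costs_ms: &[f32], budget_ms: f32) -> usize {
    let mut total = 0.0f32;
    for (i, &cost) in costs_ms.iter().enumerate() {
        total += cost;
        if total > budget_ms {
            return i; // this segment would blow the budget; drop it and the rest
        }
    }
    costs_ms.len() // everything fits
}
```

You would then submit only the first `n` segments in a single batch, avoiding the per-segment GPU synchronization described above.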
> Can add secondary orthogonal clipping triangles to each triangle, such that the clipping edge could be defined by a curve. Imagine a hexagon (made of 6 triangles), we could easily extend a low poly hexagon into a high resolution circle just by interpolating a curve here.
This kind of variable-length output problem is something that GPUs aren't very good at. You could do it with Geometry Shaders, but those tend to perform poorly and aren't available in WebGL (or Apple Metal).
It might be better to just submit a sufficient number of vertices and displace them analytically: for example, instead of supplying the coordinates of every vertex on the circle, calculate them from the vertex index on the fly. You can also use Tessellation Shaders to amplify geometry in an efficient, view-adaptive way, but again those aren't in WebGL.
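As a sketch of the index-based approach, here is the per-vertex math in Rust (the same computation a vertex shader would do from `gl_VertexID` in WebGL 2). The function name is mine; only the trigonometry is the point.

```rust
use std::f32::consts::TAU;

/// Analytic displacement: the i-th rim vertex of an n-segment unit circle,
/// computed from its index rather than stored in a vertex buffer.
pub fn rim_vertex(i: u32, n: u32) -> (f32, f32) {
    let theta = TAU * (i as f32) / (n as f32);
    (theta.cos(), theta.sin())
}
```

With n = 6 you get the hexagon; bump n to 64 and the same draw logic yields a smooth circle, with no new vertex data uploaded.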
Having said that, all of this "smart rendering" might not be necessary at all. After all, your goal is to train a neural network. You can probably just use an off-the-shelf renderer with off-the-shelf hardware and it'll be fast enough to not be the bottleneck - especially nine years down the line.
GPUs get their performance from parallelism, and a core defining feature is that they shade pixels in 2x2 squares called "quads". This is a requirement of the traditional GPU graphics programming model. That model does not mandate any particular pixel ordering, and imposing one will kill performance.
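A toy illustration of why quads exist: each fragment needs screen-space derivatives of interpolated values (what `dFdx`/`dFdy` return in GLSL, used e.g. for mip selection), and the hardware gets them by differencing across the 2x2 quad. This Rust sketch mimics that on the CPU; the function names are mine.

```rust
/// `quad[row][col]` holds an interpolated value (e.g. a texture coordinate)
/// at each of the four fragments in a 2x2 quad.
pub fn ddx(quad: [[f32; 2]; 2]) -> f32 {
    quad[0][1] - quad[0][0] // horizontal neighbor difference
}

pub fn ddy(quad: [[f32; 2]; 2]) -> f32 {
    quad[1][0] - quad[0][0] // vertical neighbor difference
}
```

Because derivatives come from neighbors, a fragment can never be shaded in isolation, which is one reason arbitrary per-pixel ordering fights the hardware.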
> Can add secondary orthogonal clipping triangles to each triangle, such that the clipping edge could be defined by a curve.
GPUs deal with triangles. Period. You cannot get arbitrary curved geometry. You could do this at the pixel shader, discarding pixels that don't meet the curve profile, but that's hard to make 3D, and you'll be relying on post-style AA (like FXAA), or analytic AA and blending. Maybe that's fine for you. But flatly declaring "I need to be able to do curved geometry" is refusing the last 25+ years of GPU development.
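To show what the pixel-shader route looks like, here is the analytic-AA coverage math on the CPU in Rust (a fragment shader would evaluate the same expressions per pixel). The signed-distance formulation and the function names are my illustration, assuming a circle as the clipping curve.

```rust
/// Fragment-shader-style curved clipping with analytic AA. `sd` is the signed
/// distance (in pixels) from the fragment to the clipping curve, negative
/// inside; `aa` is the AA ramp width (~1 pixel). Returns coverage in 0..=1;
/// a shader would discard at 0 and blend in between.
pub fn edge_coverage(sd: f32, aa: f32) -> f32 {
    // smoothstep across the edge: fully inside -> 1, fully outside -> 0
    let t = (0.5 - sd / aa).clamp(0.0, 1.0);
    t * t * (3.0 - 2.0 * t)
}

/// Signed distance from a point to a circle of radius `r` centered at the origin.
pub fn circle_sd(x: f32, y: f32, r: f32) -> f32 {
    (x * x + y * y).sqrt() - r
}
```

A fragment exactly on the curve gets coverage 0.5, which is what gives you the smooth analytic edge instead of post-process AA.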
> Once it has associated these slow compute intensive simulations with the model, ideally we can swap in high-resolution smoke/fluid graphics in-game by only using a few low resolution particle "marker" calculations that then trigger substitutions for the model.
Realistic smoke/fluid graphics can exist (see NVIDIA Flow), but you don't tend to see them in games because games prefer techniques that mesh with the lighting and art style and that interact with the environment (smoke that respects the ceiling, fluid that collides with the floor). For some meager effects it's possible to do collision against the depth buffer. The simulation part is honestly the easy part.
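The depth-buffer collision trick boils down to a per-particle comparison like this Rust sketch. The function name and the 0.5 restitution are arbitrary choices for illustration; real implementations also reconstruct a surface normal from neighboring depth samples.

```rust
/// Screen-space particle collision: if a particle's depth at its screen
/// position is greater than the sampled scene depth, it is inside geometry,
/// so reflect its velocity with some damping.
pub fn collide_with_depth(particle_depth: f32, scene_depth: f32, vel_z: f32) -> f32 {
    if particle_depth > scene_depth {
        -vel_z * 0.5 // hit: bounce back, damped
    } else {
        vel_z // no hit: keep moving
    }
}
```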
Your clockwise idea doesn't make a ton of sense to me. You typically have a depth buffer enabled in 3D and draw your scene in a front to back fashion. This lets you save on evaluations of the fragment shader for meshes that are occluded.
I guess you could sort draws in your clockwise spiral and synchronize after each one, but that is going to be a massive performance drag since the GPU can't pipeline draws anymore.
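The usual front-to-back ordering is cheap to get on the CPU, and unlike the spiral idea it needs no synchronization. A minimal Rust sketch, where each draw is just a world position plus an id for illustration:

```rust
/// Sort draws front-to-back by squared distance to the camera so the depth
/// buffer can reject occluded fragments before the fragment shader runs.
pub fn sort_front_to_back(draws: &mut [([f32; 3], u32)], cam: [f32; 3]) {
    draws.sort_by(|a, b| {
        dist2(a.0, cam)
            .partial_cmp(&dist2(b.0, cam))
            .unwrap()
    });
}

fn dist2(p: [f32; 3], c: [f32; 3]) -> f32 {
    (p[0] - c[0]).powi(2) + (p[1] - c[1]).powi(2) + (p[2] - c[2]).powi(2)
}
```

Squared distance avoids the square root and sorts identically; the GPU stays fully pipelined because all draws are still submitted in one go.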
See also
https://webgl2fundamentals.org/
And for the state of the art, check out the Disney Moana Island Scene and Disney's Hyperion renderer (although perhaps that is on its way to becoming obsolete!)
https://www.yiningkarlli.com/projects/hyperiondesign/hyperio...
https://www.technology.disneyanimation.com/collaboration-thr...
A very good practical, hands-on, bottom-up resource [1].
[1] https://fabiensanglard.net/Computer_Graphics_Principles_and_...
- GPU Gems
- SIGGRAPH
- Real-Time Rendering book