There was a discussion on here the other day about the PS6, and honestly, were I still involved in console/game production, I'd be looking seriously at how to incorporate assets like this.
It's good for visualizing something by itself, but not for building a scene out of it.
If you want a real cursed problem for Gaussian splats though: global illumination. People have decomposed splat models into separate global and PBR colors, but I have no clue how you'd figure out where that global illumination came from, let alone recompute it for a new lighting situation.
I wonder if it's possible to do some kind of blendshape style animation, where you blend between multiple recorded poses.
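For what it's worth, the simplest version of that idea is just linear interpolation between corresponding splats in two captured poses. This is only a sketch under a big assumption I'm making up here: that the two captures share a one-to-one splat correspondence (real captures wouldn't, so you'd need registration first). The function name and arrays are hypothetical.

```python
import numpy as np

def blend_splat_poses(pose_a, pose_b, weight):
    """Blendshape-style lerp between splat centers of two poses.

    pose_a, pose_b: (N, 3) arrays of splat centers, ASSUMING the two
    captures have matching splat correspondence (a big assumption).
    weight: 0.0 gives pose_a, 1.0 gives pose_b.
    """
    return (1.0 - weight) * pose_a + weight * pose_b

# Toy example: two splats halfway between two recorded poses.
a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
mid = blend_splat_poses(a, b, 0.5)
```

In practice you'd also have to blend each splat's rotation, scale, and color coefficients, not just its center, and rotations would need slerp rather than a straight lerp.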
This is objectively violating accessibility guidelines for contrast.
The best thing about reader mode is that there’s now always an escape hatch for anyone a page's styling doesn’t work for.
I'd also like to show my gratitude for you releasing this as a free culture file! (CC BY)
I would have thought that since that reflection has a different color in different directions, gaussian splat generation would have a hard time coming to a solution that satisfies all of the rays. Or at the very least, that a reflective surface would turn out muddy rather than properly reflective-looking.
Is there some clever trickery that's happening here, or am I misunderstanding something about gaussian splats?
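As I understand it, the trick is that each splat's color isn't a single RGB value: standard 3D Gaussian Splatting stores spherical-harmonic coefficients per splat, so the rendered color changes with viewing direction. That lets a splat "be" the reflection from one angle and something else from another, without the optimizer having to satisfy all rays with one color. A minimal sketch of degree-1 SH evaluation (constants are the real band-0/1 SH basis values; the layout of `coeffs` is my own simplification):

```python
import numpy as np

# Real spherical-harmonic basis constants for bands 0 and 1.
SH_C0 = 0.28209479177387814
SH_C1 = 0.4886025119029199

def sh_color(coeffs, view_dir):
    """Evaluate a degree-1 spherical-harmonic color for one splat.

    coeffs: (4, 3) array, one RGB coefficient per SH basis function
            (band 0, then the three band-1 functions).
    view_dir: unit vector from the camera toward the splat.
    """
    x, y, z = view_dir
    basis = np.array([SH_C0, -SH_C1 * y, SH_C1 * z, -SH_C1 * x])
    return basis @ coeffs  # RGB that shifts with viewing angle
```

With nonzero band-1 coefficients, the same splat returns different colors for opposite viewing directions, which is exactly the knob the optimizer uses to fake specular highlights. Higher-degree SH (typically up to degree 3 in practice) gives sharper view-dependent effects.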
Sometimes it will “go wrong”: in some of the fly models, if you get too close, body parts start looking a bit transparent, because some of the specular highlights are actually splats on the back of an internal surface. This is very evident with mirrors - they are just an inverted projection which you can walk right into.
E.g. if you have a cluster of tiny adjacent volumes that have high variability based on viewing angle, but the difference between each of those volumes is small, handle it as a smooth, reflective surface, like chrome.
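That heuristic could be sketched as a simple test on a cluster of splats: high color variance across viewing angles combined with low spatial spread suggests a mirror-like surface. Everything here is a hypothetical illustration of the idea, not an existing pipeline; the thresholds and the `spatial_spread` metric are made up.

```python
import numpy as np

def looks_reflective(colors_by_angle, spatial_spread,
                     var_thresh=0.05, spread_thresh=0.01):
    """Heuristic: a tight cluster whose color swings strongly with
    viewing angle is a candidate for a smooth reflective surface.

    colors_by_angle: (V, N, 3) colors of N splats sampled from V view
                     directions (hypothetical sampling).
    spatial_spread:  scalar spread of the cluster's centers
                     (hypothetical metric, e.g. mean pairwise distance).
    """
    angular_var = np.var(colors_by_angle, axis=0).mean()
    return angular_var > var_thresh and spatial_spread < spread_thresh
```

A real system would also need to check that neighboring splats disagree with *each other* only slightly at any single angle, per the comment above, before collapsing them into one chrome-like surface.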
But I just wanted to say the best way to interact with Gaussian Splats on mobile I've seen is with the Scaniverse app. Really, the UX is great there.
Black text on a dark grey background is nearly unreadable - I used Reader Mode.
https://superspl.at/view?id=ac0acb0e
I believe this one is misnamed
I presume these would look great on a good VR headset?
I wonder if one could capture each angle in a single shot with a Lytro Illum instead of focus-stacking? Or is the output of an Illum not of sufficient resolution?
It should be possible to model the camera's focal depth directly, though perhaps that isn't done in standard software. You'd still want several images with different focus settings.
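The standard way to model that is the thin-lens circle of confusion: for a lens focused at one distance, a point at another distance blurs into a disc whose diameter you can compute in closed form. A minimal sketch of that textbook formula (units and parameter names are my own choice):

```python
def circle_of_confusion(aperture_mm, focal_mm, focus_mm, subject_mm):
    """Thin-lens circle-of-confusion diameter (mm) on the sensor for a
    point at subject_mm when the lens is focused at focus_mm.

    aperture_mm: aperture diameter (focal length / f-number).
    focal_mm:    lens focal length.
    """
    return (aperture_mm * focal_mm * abs(subject_mm - focus_mm)
            / (subject_mm * (focus_mm - focal_mm)))
```

A reconstruction pipeline that knew this model could in principle fit per-image focus distance and deblur accordingly, which is presumably why you'd still want multiple focus settings: each one gives you a different slice of the scene that's sharp.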
I'd love to know the compute hardware he used and the time it took to produce.
Most likely, triangles are used to render the image in a traditional pipeline.