The video at https://www.youtube.com/watch?v=9NOPcOGV6nU&feature=youtu.be is worth watching, especially the parts showing how the model gradually constructs and improves a labeled 3D mesh of a live room as it is fed more visual data by walking around the room.
--
On a related note, Magic Leap has been trying to find a buyer for the business for several months now:
https://www.roadtovr.com/report-magic-leap-buyer-sale/
https://www.bloomberg.com/news/articles/2020-03-11/augmented...
The company has a cool name and the product area is divisive. Some say it's vapourware and that nobody wants Oculus Rift-style VR. Others are gung-ho. It's like Bitcoin all over again.
Although this tech is now being done with AI, it was being done with non-AI approaches two decades ago for movies/TV. But it wasn't as if people ported that tech to their smartphones from the SGI desktop monsters of yesteryear.
It seems super, super difficult... there are free-flowing liquids, and the esophagus/upper lining of the stomach changes form quite drastically and often. How would you guys approach this problem?
Did they ever make it into real-world practice?
https://www.youtube.com/c/okreylos/videos
Five years ago he was active in the Vive VR world.
I have seen random still images used for this kind of thing: https://nerf-w.github.io/
I haven't heard of any equivalent of EXIF for video. That metadata goes a long way when trying to make sense of random video, both for camera settings and for GPS location if you're trying to correlate multiple videos.
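For what it's worth, EXIF stores GPS coordinates as degree/minute/second rationals plus a hemisphere reference, so correlating footage by location means converting those to signed decimal degrees first. A minimal Python sketch (the `dms_to_decimal` helper is hypothetical, not from any particular library):

```python
# EXIF's GPS Info IFD stores latitude/longitude as three rationals
# (degrees, minutes, seconds) plus a hemisphere reference ("N"/"S"/"E"/"W").
# This converts such a triple to signed decimal degrees.

def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert an EXIF-style (deg, min, sec) triple to decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # South latitudes and West longitudes are negative by convention.
    return -value if ref in ("S", "W") else value

# Example: 40° 26' 46.8" N is roughly 40.4463 decimal degrees.
lat = dms_to_decimal(40, 26, 46.8, "N")
```

With decimal coordinates in hand, clustering clips shot near each other is just a distance check.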
https://news.ycombinator.com/item?id=24071787
EDIT: @bitl: Tremendous, thanks for the reply. It would be amazing to build these scenes just by walking around a room with your mobile phone recording video, then processing the frames into scenes (especially on mobile platforms with a depth sensor to enrich the collected data).
NeRF would probably produce a much better final result, but the Atlas approach (no need to train something from scratch per scene) is the only one that can hope to run in real time, which is vital for some applications.
We can now make tiny virtual cars do stunts off object in the real world: https://www.youtube.com/watch?v=9NOPcOGV6nU&feature=youtu.be