Correct: it generates a surface light field from a set of RGBZ images that sample the scene surfaces. That light field is then simplified into a set of alpha-textured quads which closely reproduce the scene from anywhere inside a specified headbox.
View frustum culling wouldn't have been enough to render this stuff (https://www.roadtovr.com/preview-google-seurat-ilm-xlab-mobi...) in VR on a mobile GPU. :-)
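If it helps make that concrete, here's a rough C++ sketch of what the baked output amounts to. The type names, fields, and headbox test are mine for illustration, not the actual Seurat API or file format:

```cpp
// Minimal sketch, not the real Seurat API or file format -- just the shape
// of the baked output: a headbox (the region the eye may occupy) plus a set
// of alpha-textured quads that stand in for the original scene.
#include <array>
#include <cstdio>
#include <vector>

struct Headbox {
    std::array<float, 3> min, max;  // axis-aligned box of valid eye positions (scene units)
};

struct AlphaTexturedQuad {
    std::array<std::array<float, 3>, 4> corners;  // world-space quad corners
    int texture_id;                                // RGBA texture; alpha encodes coverage
};

struct BakedScene {
    Headbox headbox;                       // quads are only a faithful reconstruction inside this box
    std::vector<AlphaTexturedQuad> quads;  // a few thousand quads replace the full scene geometry
};

// True if the eye is inside the headbox, i.e. the baked quads still look like
// the original scene from this viewpoint.
bool EyeInsideHeadbox(const Headbox& box, const std::array<float, 3>& eye) {
    for (int i = 0; i < 3; ++i) {
        if (eye[i] < box.min[i] || eye[i] > box.max[i]) return false;
    }
    return true;
}

int main() {
    BakedScene scene;
    scene.headbox = {{-0.5f, -0.5f, -0.5f}, {0.5f, 0.5f, 0.5f}};  // ~1m cube around the seated viewer

    AlphaTexturedQuad wall;  // one quad standing in for a distant surface
    wall.corners = {{{-1, 0, -2}, {1, 0, -2}, {1, 2, -2}, {-1, 2, -2}}};
    wall.texture_id = 0;
    scene.quads.push_back(wall);

    std::array<float, 3> eye = {0.1f, 0.0f, 0.2f};
    // At runtime you just draw scene.quads with alpha blending from `eye`;
    // there is no per-frame scene traversal, which is what makes the budget
    // workable on a mobile GPU.
    std::printf("eye inside headbox: %s, quads to draw: %zu\n",
                EyeInsideHeadbox(scene.headbox, eye) ? "yes" : "no",
                scene.quads.size());
    return 0;
}
```

The point being that all the expensive light-field processing happens offline; at runtime the headset only ever draws a small pile of alpha-blended quads.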