Clause 2 Opportunity to reply
A fair opportunity for reply to inaccuracies must be given when reasonably called for.
You can file a complaint here: https://www.ipso.co.uk/

Also, try using an Oculus with seconds of latency and see how pleasant it is.
The real point is that Daniel Shafrir borrowed that robot and a video camera, took credit for years of work by a team of students and staff at CMU, and somehow got a BBC reporter to publish his claims without verifying any of them.
1. Build an array of a few (let's say 8 or so) cameras pointing outward from the center.
2. Use a stereo matching algorithm to extract a depth map from the perspective of each camera. Keeping track of the position and orientation of each camera in 3D space, these depths become a point cloud associated with each camera.
3. Determine the 3D location and orientation of each "eye" you want to render, then render all point clouds in 3D space to reconstruct a "reprojected" version of the scene from any desired viewpoint. Of course, the farther the eyes deviate from the actual camera locations, the more stretched/warped the image will appear, but that won't matter much as long as you keep the eye coordinates within the physical space occupied by the camera array.
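To make step 3 concrete, here's a minimal sketch of the back-project/reproject idea in NumPy. Everything here is assumed for illustration: a pinhole camera model with hypothetical intrinsics `K`, 4x4 camera-to-world poses, depth maps already produced by some stereo matcher (step 2), and a naive per-point z-buffer splat instead of a real renderer.

```python
import numpy as np

def backproject(depth, K, cam_pose):
    """Turn one camera's depth map into a world-space point cloud.

    depth:    (H, W) depth along the camera's z axis (from stereo matching)
    K:        3x3 pinhole intrinsics
    cam_pose: 4x4 camera-to-world transform
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Homogeneous pixel coordinates, one row per pixel.
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T           # camera-space rays with z = 1
    pts_cam = rays * depth.reshape(-1, 1)     # scale each ray by its depth
    pts_h = np.concatenate([pts_cam, np.ones((pts_cam.shape[0], 1))], axis=1)
    return (pts_h @ cam_pose.T)[:, :3]        # into world space

def reproject(points, K, eye_pose, H, W):
    """Splat a world-space point cloud into a virtual eye, keeping the
    nearest point per pixel (a crude z-buffer). Returns a depth image."""
    world_to_eye = np.linalg.inv(eye_pose)
    pts_h = np.concatenate([points, np.ones((points.shape[0], 1))], axis=1)
    pts_eye = (pts_h @ world_to_eye.T)[:, :3]
    pts_eye = pts_eye[pts_eye[:, 2] > 1e-6]   # drop points behind the eye
    proj = pts_eye @ K.T
    uv = np.round(proj[:, :2] / proj[:, 2:3]).astype(int)
    depth_img = np.full((H, W), np.inf)
    for (pu, pv), z in zip(uv, pts_eye[:, 2]):
        if 0 <= pu < W and 0 <= pv < H and z < depth_img[pv, pu]:
            depth_img[pv, pu] = z
    return depth_img
```

In a real system you'd run `backproject` once per camera (step 2's output), merge the clouds, and call `reproject` twice per frame, once per eye; the warping mentioned above shows up as holes in `depth_img` where no camera saw the surface.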
Honestly, I'd be kind of disappointed if CMU doesn't actually try this. It's discouraging to think that all this buzz about "hackathons" (as the article mentions) rewards, even at major research universities, quickly slapping components together to make something that sort of works, as opposed to fundamental algorithm development and properly engineered solutions.
This approach is obviously wrong if you tilt your head, but people generally don't do that enough to notice.
There is no "beam" or magic. It's 2 cameras and 2 video feeds. That would be great and exciting if we, as the general public, had access to those video feeds.