Dream is now available in early access on the Oculus store, so we'd really appreciate any feedback and thoughts people have. We truly believe that immersive technologies like virtual reality can make remote work and collaboration better than existing 2D form factors - especially as the new standalone VR headsets like Quest come to fruition in the coming year.
Dream has been built entirely from scratch, so we got to rethink a lot of the stack. We prioritized certain things, like networking and UI, and we're really proud of the outcome. Doing so also meant it took us a lot longer to bring a product to release, since there was a lot more to do - but it allowed us to integrate WebRTC at the native level, as well as Chromium (by way of CEF), so we can do things like bring up our keyboard when a user hits a text field.
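To make the "keyboard pops up when you hit a text field" idea concrete, here's a minimal sketch of the kind of hook involved. This is purely illustrative (Python, not Dream's native code) - the class and method names (`VRKeyboard`, `BrowserView`, `on_focus_changed`) are our own assumptions, not CEF's actual API, though CEF does expose a comparable focused-node-changed signal.

```python
# Illustrative sketch: the embedded browser reports when an editable
# node gains or loses focus, and the native layer toggles the VR
# keyboard in response. All names here are hypothetical.

class VRKeyboard:
    def __init__(self):
        self.visible = False

    def show(self):
        self.visible = True

    def hide(self):
        self.visible = False

class BrowserView:
    def __init__(self, keyboard):
        self.keyboard = keyboard

    def on_focus_changed(self, node_is_editable):
        # Called when focus moves inside the web view; an editable
        # node (e.g. a text input) brings up the keyboard.
        if node_is_editable:
            self.keyboard.show()
        else:
            self.keyboard.hide()
```

The point of routing this at the native layer is that the keyboard can appear in 3D space next to the view, rather than being drawn by the page itself.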
Hope people like it, and want to say thanks to everyone that made it possible!
To respond to some of your other questions - one thing that we've noticed isn't super clear is that Dream is not doing any form of desktop sharing. We integrated Chromium at the native layer (by way of a forked version of CEF), and the content views are all web browsers. This allows for a level of integration with the rest of our stack that is difficult or impossible to achieve if you're doing desktop sharing (we actually built desktop sharing, but disabled it in the build for now until we can solve some inherent usability problems).
We're big fans of Bigscreen, but I think they're heavily shifting their focus toward entertainment and watching movies in VR together. Also, we had been working on Dream for 1.5+ years when Dash was announced, and were excited to see some similar ideas there! We're trying to find ways to make VR a viable solution for remote working and collaboration, and this has led to many of the hard decisions we've had to make - especially since we decided to build the entire stack. This obviously meant it took us a lot longer to get something out there, but as a result Dream is a lot more intuitive and seamless than you might expect.
For example, our keyboard was heavily inspired by an early Google VR experiment we saw, but after building out a version of it we quickly understood why it wasn't a viable text-entry solution on its own. We built our own collision system and "interaction engine" to allow views and apps in Dream to respond at the event level of "touch start, touch end", similar to what you'd expect when building an iOS app - and underneath, the interaction engine updates the collision / position of everything in real time. As a result, we've seen people hit 30-40 WPM on our keyboard thanks to the tactile cues we've included (audio/haptics) as well as a kind of activation region, which lets you really time and feel out the key press. It's definitely hard to describe or show this in videos since it's all happening at 90 FPS - but hey, it's a free download, so give it a shot!
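To give a feel for the activation-region idea, here's a minimal sketch (Python, purely illustrative - the depth thresholds and names are our own assumptions, not Dream's actual code) of a key that fires touch-start/touch-end events with hysteresis, so a fingertip hovering right at the threshold doesn't double-trigger:

```python
# Illustrative sketch of an "activation region" with hysteresis:
# the press threshold is deeper than the release threshold, so a
# fingertip jittering near the surface can't rapid-fire the key.

class Key:
    def __init__(self, press_depth=0.008, release_depth=0.004):
        # Depths in metres past the key surface (hypothetical values);
        # release_depth < press_depth gives the hysteresis band.
        self.press_depth = press_depth
        self.release_depth = release_depth
        self.pressed = False

    def update(self, fingertip_depth):
        """Called each frame; returns 'touch_start', 'touch_end', or None."""
        if not self.pressed and fingertip_depth >= self.press_depth:
            self.pressed = True
            return "touch_start"   # e.g. play click audio, fire haptic pulse
        if self.pressed and fingertip_depth <= self.release_depth:
            self.pressed = False
            return "touch_end"     # e.g. commit the character on release
        return None
```

Feeding this per-frame fingertip depths yields exactly one start/end pair per deliberate press, which is the property that makes timing and "feeling out" a key press possible.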
Dream never asks you to revert to your monitor or take off your headset - this was a strict rule. It means that everything, from logging in to inviting someone new to your team, had to be possible in VR. To accomplish this, we created a Chromium integration with Dream so that we could run web views that manipulate our engine directly. To us, asking the end user to remove their HMD for any reason is equivalent to asking them to restart their computer - it's really not acceptable.
Our goal is to demonstrate how immersive technologies like virtual reality can enable remote collaboration and communication use cases. Especially in terms of how VR, by comparison to existing 2D formats of video/voice, provides an improved layer of presence through nonverbal communication cues.
I will say though, the book itself was a big part of me personally deciding to get into this project after I left my last one.
It felt really amazing when I was alerted by a sensor in the gallery that someone was visiting it, and I could teleport back to the gallery to meet them. Their location in the virtual space (which pictures they were standing in front of) said something about the pictures they liked. I could read something about their persona from the avatar they were using (especially in conjunction with other scanning tools). My own little gallery was just one of many, and other organisations and groups created much more impressive interactive environments (admittedly a lot of them seemed to be for various forms of role-play, some very unsavoury).
The promo video shows the participants in what is, in effect, a completely conventional conference room - one screen and chairs around a table. The wider space doesn't seem to contribute functionally at all - it's a pretty backdrop, but it doesn't display anything that contributes to the meeting. So I'm curious - could this sort of capability be used to create more dynamic interactions, or are we limited by the tech (tethered interaction by seated people) to more constrained situations? (Please don't get me wrong - I'm supportive of the concept - but I'm encountering pushback from colleagues and customers who don't see the potential.)
Dream doesn't allow for locomotion by design. Dream is meant to be a place where people meet to be productive. The environments are intentionally pretty but not distracting. The focus is on interaction with the other participants and the shared content. We feel that removing locomotion and reducing dimensionality is how we will make the interactions with Dream simpler, especially for new users. Mechanisms like teleportation are super fun and certainly add to immersion and are the right choice for all sorts of VR experiences. However, Dream has been built from the perspective that users are here to collaborate and then go back to real life. In that context, something like teleportation is fun and novel the first time you use it, but the 10th time, we feel like most users would just prefer a menu. The overall idea being, reduce dimensionality to increase precision and simplicity.
I'm happy to hear a critique of this philosophy. Ultimately we have to create software people love to use, and we certainly understand that we might be proven wrong about this.
However, he told me that VR can't be used for anything productive right now because of many problems. For one, only very few people can wear a VR headset for 8 to 10 hours straight without getting serious headaches and dizziness. The resolution has to be at least one order of magnitude higher for meaningfully sized fonts to be properly readable. The headsets have to be one order of magnitude lighter. And if you want to do more than a 2D wall displayed like a canvas in 3D, you have to solve the problem that your eyes automatically try to focus differently at different depths, which is another major source of headaches.
All in all, I was convinced by him that VR technology is at least two generations behind what you would need for serious work. Until then, all kinds of software, SDKs, and hardware will change dramatically. Hence investing in VR productivity software development right now is a complete waste of time and money.
Do you think advances in technology just happen by themselves? We need people to invest time and money RIGHT NOW to get to better software, hardware, and mainstream adoption.
However, these "full day" applications are definitely a few years out. We've seen Dream sessions run up to 90 minutes with little issue, and average something like 45 minutes. That's effectively the duration of a meeting - and this is where we're focusing our efforts: where productivity and collaboration intersect.
I think that from the perspective of a financial institution investing in these technologies to change the way people access financial data all day, it's not there yet. However, to get there, we need to start building these platforms today. That has been our intention, and we think that even in the meantime, being able to have a remote meeting with 4-6 people from all over the world, and to bring in content like presentations or generic web pages, will provide a level of utility out of reach of even some of the $300K teleconferencing solutions out there.
Regardless of the hype the industry has received, these headsets are still in their infancy. One of the big steps is about to be taken by Oculus's upcoming Quest headset, due to launch in spring 2019. The big step this HMD takes is cutting the umbilical cord to the computer while providing the first fully standalone 6DOF setup (both HMD and controllers). They even threw in a healthy resolution bump, which we're excited about. This could well be the 'BlackBerry' moment for VR.
It sounds like you have easy access to VR HMDs, so if you get a chance to plug into an Oculus Rift any time soon, I recommend you try it out - and we would definitely love your hands-on feedback.
We've done a fair bit of jiggering to make sure a majority of web based content is both legible and usable - and our UI has been built to try to be as intuitive as possible, and eliminate a lot of the bumbles that we too associate with many VR experiences.
It's a free download, and you don't have to create an account if you don't want to - once you download, you'll be presented with an account sign up / log in form where the keyboard can be used and played with a bit. We also use Chromium for all of our login / account creation flow in VR - so you can get a taste of what that feels like as well. If you want to go further, just create a throwaway account and never verify the email - then you can pull up a site like Trello or NYT and assess the usability and legibility.
I think that if you're coming from a place of comparing this to existing 2D based collab tools like Skype / Zoom etc you'll have a hard time seeing the benefit, but if instead you try to look at how those tools are insufficient compared to a real-life meeting you might see where we fit. Our goal is not to replace 2D based methods, but to allow for a level of presence previously only possible with in-person interactions. This shines in particular in situations where you're meeting with three or more people along with content at the core of the meeting.
Hope you get a chance to try it, and would love to hear what you think and how we can make it better!
Are you using straight CEF, or have you improved the compositor to composite directly into a texture? IIRC CEF only provides the composited web page as a bitmap, and then you're going to have to do repeated texture uploads, which is going to be a drag.
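For readers unfamiliar with the tradeoff this question raises: CEF's off-screen rendering mode hands the host a bitmap on each paint, along with the rectangles that actually changed, and the per-frame cost depends heavily on whether you re-upload the whole surface or only those dirty regions. A toy back-of-the-envelope sketch (Python, illustrative numbers only, not anyone's actual pipeline):

```python
# Toy cost model for bitmap -> texture uploads of a BGRA (4 bytes/pixel)
# web view. Uploading only dirty rects (glTexSubImage2D-style) instead
# of the full surface cuts per-paint bandwidth dramatically.

def full_upload_bytes(w, h):
    # Re-uploading the entire surface every paint.
    return w * h * 4

def dirty_rect_bytes(rects):
    # Uploading only the changed regions, given (width, height) pairs.
    return sum(rw * rh * 4 for (rw, rh) in rects)

# A 1024x1024 view where only a 200x40 region (say, a blinking caret
# area) changed this paint:
full = full_upload_bytes(1024, 1024)     # 4,194,304 bytes (4 MiB)
partial = dirty_rect_bytes([(200, 40)])  # 32,000 bytes
```

At 60-90 paints per second, the difference between those two numbers is exactly the "drag" being asked about.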
Does this support VIVE too or only Oculus?
Really excited for new HW and capabilities to become available commercially. We built out our keyboard to be effective without anything but what's currently available (6DOF HMD with 6DOF controllers), and we'll continue to expand support for commercially available capabilities. Maybe it's an unorthodox perspective, but we really only want to ship and represent capabilities that any user can attain easily - and not tease things that are soon to (but may not ever) come.
How is this different than BigScreen? It allows people to view computer screens (video games, browsers, movies) in VR with other people.
Edit: saw that you answered the bigscreen question earlier.
Let me know if my response to the Bigscreen question above is sufficient, or if you have other specific questions about anything. Would be happy to dig deeper into anything. Super excited to share the hard work of our team - we've basically been quietly coding for years now, so this is the first time we really get to talk about what we've been up to.
Dream is currently only available for Oculus Rift - and the video was actually shot with an in-engine camera that we developed, and captured on a mirror pipeline at 4K - so I think the blurriness in the video may be an artifact of streaming? Here's a link to the vimeo, which might let you watch it at the 1080p resolution we scaled it down to: https://vimeo.com/291432708/4c32095226
We're excited to get Dream onto other HMDs, especially the mobile standalone ones coming soon - really great that the Quest is going up to 1600x1440 as it will make use cases like ours work even better!