I intend to do a project with one in the next few months, and I want to get other developers' feedback on the device and a general synopsis of feelings regarding it, as the emulator can only give so much "immersion".
There also hasn't been much discussion about this device on HN (at least for a few months) and I'd like to know how developers feel about it: pros, cons, first impressions, etc.
- The display technology is very nice. I was very impressed by how good the object permanence was: when you put an object somewhere, there is no lag or jitter when you move your head and it stays anchored to the spot. The holograms are reasonably bright and opaque.
- Also, when you pin an object somewhere, it stays there even when you walk around the room. It even stays if you pin it in like the middle of the room where there are no obvious reference points or anchors to use.
- The field of view is neither great nor terrible. It's usable but more would of course be better.
- The major downside is the interaction: "air-clicking" is not great and the gestures to trigger various actions aren't very reliable. It really needs hand controllers like the Vive has.
- The unit itself is comfortable, much more so than the Vive. There was an annoying lens-flare-like glare below the field of view. Not sure if that was my unit not set up correctly or a problem common to all of them.
Overall I'm quite impressed, although I probably wouldn't buy one even if I had $3,000 to burn. V2 will probably be the one to get, if they expand the FOV.
Also, the sound is well done. As with vision, it doesn't cover or plug your ears, so you can maintain awareness of your environment. I didn't have high hopes for the sound quality, but I was pleasantly surprised.
The gesture side needs a ton of work though. There are only a handful of gestures, and none of them interact directly with the virtual objects. You need to turn your neck to look at something, then make a click gesture to select that item. There is no ability to grab something, mold clay, punch bad guys, etc. Just look around and click, and occasionally drag, though if you try to drag something out of the sensor's field of view it gets dropped.
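For anyone curious what that look-then-click pattern looks like in code, here is a minimal sketch using Unity's HoloLens-era input APIs (the `GestureRecognizer` tap event and a gaze raycast; the namespace was `UnityEngine.VR.WSA.Input` in early Unity 5.x and `UnityEngine.XR.WSA.Input` later — the class and script names here are just illustrative):

```csharp
using UnityEngine;
using UnityEngine.VR.WSA.Input; // UnityEngine.XR.WSA.Input in later Unity versions

// Gaze-and-commit: raycast from the head to find the current target,
// then let an air-tap "click" whatever is under the gaze.
public class GazeAndTap : MonoBehaviour
{
    private GestureRecognizer recognizer;
    private GameObject gazeTarget;

    void Start()
    {
        recognizer = new GestureRecognizer();
        recognizer.TappedEvent += (source, tapCount, headRay) =>
        {
            // The tap itself carries no position; it acts on whatever
            // the user is currently looking at.
            if (gazeTarget != null)
                gazeTarget.SendMessage("OnSelect", SendMessageOptions.DontRequireReceiver);
        };
        recognizer.StartCapturingGestures();
    }

    void Update()
    {
        // Gaze = a ray from the camera (i.e. the head) straight ahead.
        RaycastHit hit;
        gazeTarget = Physics.Raycast(Camera.main.transform.position,
                                     Camera.main.transform.forward, out hit)
                     ? hit.collider.gameObject : null;
    }

    void OnDestroy()
    {
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```

Note how the "drag out of sensor range" failure mode falls out of this design: the gesture is tracked by the headset's forward-facing sensor, not by the object, so once the hand leaves the sensor's view the gesture stream simply ends.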
I agree on the gesture reliability being a current limitation, but wouldn't a move towards hand controllers sort of be a step backwards? Interested to hear what other people think but seems like moving away from hand controllers is an inevitability, will just take some time and further investment in gesture control.
Clicking/selecting objects with gaze is often an antipattern. Much better to use alternate input.
Analytics is kind of a mess.
Everybody recommends Unity, but performance will suffer. I wrote my own framework instead. Most of the open-source code is bad, so if you have figured out how to make apps, it's a competitive advantage. MS wrote literally thousands of new APIs for UWP and mixed reality, so many features are barely documented, with no real-world examples.
It's a totally new paradigm in UX. Most designers fall back to poor decisions like using small buttons or overly detailed models.
Feel free to ask anything specific; I'll do my best to answer.
I have the HoloLens on about half the time, so I've spent a ton of time in it.
Yes surprisingly the Hololens is "just" an x86 Windows compile target. I've found a lot of software that "just works" as long as the UI isn't hardcoded.
I demoed Google Cardboard VR to a bunch of kids (probably about 200) who thought it was amazing. They really enjoyed it, and my sense from teaching them with VR was that this is another tool people will use in the future.
Kids are already wired up with headphones, phones in pockets, phones wired to external battery boosters. I say this because I thought cumbersome cables from phone to goggles would be a no-no and the technology would have to wait for a wireless solution. But I think I am wrong; kids already have cables snaking around their bodies.
Personally I think external cameras that can show 'the outside world' inside the VR world is the future. I'm interested in your thoughts on AR... this I see as a great tool for industry, but not for the general public?
People really don't like the concept that I can see a world they can't experience. Some people become offended. Making people feel comfortable has a long way to go, but as you mention kids start with no preconceived notions so this may not be a stigma for future generations.
Also, I've looked through the MVA for the HoloLens and done some messing around in Unity, but are there any good, active blogs about the HoloLens? I know there are a few like Mike Taulty and some other blogs that might post about it occasionally, but I would like a "Morning Dew" for HoloLens.
The AR blogs I find are mostly content-free, not exactly their fault because most demos are just PR demos and the real dev is under wraps at private industry. Similar to the comments here, whoever knows what's up is not talking. Lots of NDAs and patents around the technology.
As far as a design workflow I've heard many designers are happy with their existing 3D tools like Maya and Unity. Any standard 3D object and texture can be loaded into the hololens.
I am interested in learning more about the analytics/monitoring that you did. Would you be able to share more info, and let me know how we can sync up so I can learn more about it?
I also see a class separation between people who are augmented and who are not. It's going to blow apart industries and relationships. Eventually.
Things I like:
- Lets you have an infinite number of virtual monitors with applications such as Word, Outlook, browsers, etc.
- Developing for it is really easy with tools like Unity
- Battery life is not too shabby; I rarely have to take it off to charge while I'm doing something
- Great demo piece
Things to work on:
- Field of view isn't terrible, but could still use improvement
- Price point precludes a lot of consumer applications
- Feels like you're always wearing sunglasses indoors. This takes away from the augmented reality bit, as it can be pretty hard to interact with the real world sometimes (e.g. hard to read my real monitor when I have it on)
- Gets kind of uncomfortable on your nose after a while, though that may depend on your face morphology
- Interacting with voice commands in an office setting can be awkward/amusing
- My colleagues think I'm never working
Also, what steps did you take to fully learn the technology (MVA, Blogs, VRDC, etc.)?
I can do whatever I want with it. I was surprised to find out after I joined that the regulations are really not that bad, less even than when I was at Microsoft. I think it really depends on the department and the function. The HR department for example is a lot more stringent (though believe it or not, they also explored VR with our team).
They have a pretty good set of tutorials on the HoloLens site, though they're kind of monotone and over polished for my taste. I think the key thing is to know Unity, which I've mainly done through doing and exploring their API or youtube videos. MVA is a pretty good resource, though I haven't needed to use it.
Are they usable for working? Is the resolution good enough that you can use virtual displays for coding / browsing / looking at YouTube videos?
But over time, the steady improvement of touch displays, SoCs, and supply chains brought sub-$400 devices that are likely more powerful than my desktop PC at the time.
I think they made a reasonable compromise between FOV and ergonomics based on the current state of available hardware tech. While UI and software applications remain to be developed to make use of the tech, I'd be the least concerned about things like FOV going forward. It seems like a tradeoff that was made to avoid the bundles of tether cables and high-powered host machines seen in much of the current VR space.
Will be interesting to see where things go given they've switched suppliers for key parts supposedly in preparation for a 2019 release.
From an application developer's perspective, the only difference between HoloLens coding and Mixed Reality coding is the 3D scene background. A HoloLens app should have a transparent background, so the person can see their room through the viewport (that's what they're buying the expensive headset for); a Mixed Reality app should have an opaque background, because it's VR, not AR.
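In Unity terms, that one difference amounts to a camera setting. A minimal sketch (the script and field names are just illustrative; on a transparent, additive display like HoloLens, clearing to solid black with zero alpha means "draw nothing", which the hardware shows as see-through):

```csharp
using UnityEngine;

// HoloLens: clear to transparent black so the real room shows through
// the additive display. Occluded (VR-style) Mixed Reality headsets:
// clear to a skybox or solid color like any other VR title.
public class CameraBackgroundSetup : MonoBehaviour
{
    public bool isTransparentDisplay = true; // true for HoloLens-style hardware

    void Start()
    {
        var cam = Camera.main;
        if (isTransparentDisplay)
        {
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = new Color(0f, 0f, 0f, 0f); // "clear black"
        }
        else
        {
            cam.clearFlags = CameraClearFlags.Skybox;
        }
    }
}
```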
The really big thing, though, is that $299 is roughly what you'd otherwise pay for a pair of big monitors. Full-on virtual desktop support with floating windows for these devices is being shipped to every Windows 10 machine starting this week via Windows Update, the intent being that you don't need old-school monitors: just work in the headset, or with your monitors, or however you want.
Windows now has (or will shortly, depending on your Windows Update timing) a built-in developer-mode simulator for testing Mixed Reality code without a physical headset. The simulator is still a little buggy and incompletely documented (remember to shut it off when you're not using it), but it's pretty incredible and more than enough to start building and testing applications.
[0] https://www.engadget.com/2017/04/12/acer-microsoft-vr-mixed-...
[1] https://developer.microsoft.com/en-us/windows/mixed-reality
How well will this work though? Virtual desktops exist for the Oculus Rift and the Vive but from what I understand, the limited resolution doesn't make it a great experience, and can't really replace monitors yet for text heavy work.
> it's amazing how few people are paying attention to this announcement
It's not amazing, because they're not showing anything except what appear to be fake marketing videos. They've always been cagey with the HoloLens and what it was actually like, and it looks like they're continuing down that line.
I'll also add that I'm not sure what you mean by "they're always cagey with the HoloLens": it's been a shipping device for about a year now, they're available in the wild, and lots of us outside of Microsoft have plenty of time in the HoloLens. The suggestion that they've been cagey about what's coming for the Mixed Reality headsets is similarly nonsense. If you want to stay a Microsoft hater because of something some guy said about Java more than 20 years ago, that's totally your privilege; I personally prefer to work in the present.
Are you implying hololens could replace my monitors (in a practical sense I mean)?
One of the biggest problems I run into is even with a 30" 4k display, I'm always out of room to run concurrent windows.
Gesture recognition was really good when I was testing it; it really doesn't take long to learn, and rarely would I have to tap twice. The standard gesture is bringing your index finger and thumb together, though I definitely had a hard time explaining that to users.
The clicker makes things easier however.
Pros: Very intuitive controls after maybe 5 minutes of using it. Building in voice commands is easy in Unity; I can't speak for other platforms. AR has more practical applications (but VR is more mature). Microsoft listens, and will try to add features that people ask for. The forums were very helpful for someone a year out of programming coming back to learn. Spatial mapping was really cool; I didn't think something could be that accurate in the space of a few minutes.
Cons: Controls can be a steep learning curve for older individuals (based on my experience). Development setup was hard when I started, but has gotten much better from what I hear. Showing what you're doing in the HoloLens live was very hard; we had to build that in, but I think they've now cleaned that up as well through Unity. I think the previous point shows that this is a very new platform, and things are going to change. Keep that in mind, and don't get too mad if things break. It's not super powerful, so you'll have to move to DirectX if you want to pull every inch of performance out of it. Shaders are your friend (I'm a newbie when it comes to game dev, so this was a lot of learning for me).
I know that people mentioned that FoV is bad, or could be improved, but honestly I didn't have a problem with it. With AR, and how you can still see the world around you, it wasn't a hindrance for users that would demo. That being said, I wouldn't oppose an improvement!
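On the "voice commands are easy in Unity" point above: it really is only a few lines with Unity's `KeywordRecognizer` (in `UnityEngine.Windows.Speech`). A minimal sketch — the command phrases and handler here are made up for illustration:

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Register a fixed set of keywords, then act when one is recognized.
public class VoiceCommands : MonoBehaviour
{
    private KeywordRecognizer recognizer;

    void Start()
    {
        recognizer = new KeywordRecognizer(new[] { "reset scene", "place object" });
        recognizer.OnPhraseRecognized += args =>
        {
            Debug.Log("Heard: " + args.text);
            if (args.text == "reset scene")
            {
                // hypothetical handler -- replace with your own logic
            }
        };
        recognizer.Start();
    }

    void OnDestroy()
    {
        recognizer.Stop();
        recognizer.Dispose();
    }
}
```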
It's surprisingly good at "drawing dark". It can't, really, so it just puts a neutral density filter in front of the real world to dim out the background. This, plus some trickery with drawing intensity, allows overlays on the real world. At least the indoor real world; the grey filter is fixed, and the display will be overwhelmed in sunlight.
The field of view is too small for an immersive illusion. The resolution is too low for the "infinite number of monitors" some people want. It's useful for putting an overlay on what you're working on, which suggests industrial and training applications.
It's not clear there's a mass market for this. Certainly not at the current price point. If it became cheap enough to sell to the Pokemon Go crowd, it might work for that.
A useful metric is, "Is it good enough for Hyperreality?"[1] As yet, it's not. But it could get there. Watch that video. What hyperreality needs is 1) really good lock to the real world, 2) adequate but not extreme resolution, 3) wearability, 4) wide field of view, 5) usability under most real-world lighting conditions, and 6) affordability. The HoloLens has 1 under good conditions, has 2, arguably has 3, and lacks 4, 5, and 6. Not there yet.
And I was able to try one on at a meetup.
Considering the whole thing is self-contained and handles the rendering on the device, it's amazing. With some of the dev tools you can see it building models of everything and everyone in the room in real time.
I played the Conquer game, which was fun; it was neat to watch the characters hide behind chairs and stuff. The maps sort of built themselves to fit the room, and it worked even with lots of people at the meetup.
Getting the hand gestures takes a second, but they are pretty intuitive, with "clicking" on stuff done by sort of pinching your index finger and thumb together.
The field of view is actually only the glasses under the visor. The visor, I believe, is more to help improve contrast and block a bit of light.
Currently I'm working on a large HoloLens project for the aircraft industry. But the amount of possibilities I can think of with a HoloLens (or similar device) is limitless.
The HoloLens has amazing tracking and latency. In a couple more years, when HoloLens and/or competitors release a device with a large field of view, HoloLens-like tracking/latency, and leap motion-like hand recognition, it's going to be very exciting.
To speculate, I'd say VR will find its killer app in gaming/entertainment (similar to TV), and AR will become the next great I/O interface between humans and computers (similar to phones/tablets).
-While the FOV is less than ideal it is not experience breaking
-The device is more comfortable than most headgear technology out today (there are also adjusters, such as a nose piece and headband, that make it more comfortable for a long duration)
-It is intuitive. This device can and will be easily picked up by many people. We found that older people who could barely stand operating a smartphone would throw it on and almost instantly understand it. There is just something about this device that makes people feel like they can handle it without too much work. And the fact is that they can; it is very simple to use, and the hand gestures may be the main reason for it.
-While the hand gestures may not be the most reliable, the included clicker remedies this quite satisfactorily. Giving this Vive-esque controllers would completely ruin the experience and what Microsoft was trying to accomplish.
-The UI and operation are unobtrusive which means that while it doesn't have much productivity use right now, it will in the future.
If you would like to get a better idea of what the HoloLens does and can do, we urge you to find our YouTube channel. We try to deliver our content in a non-technical way, to explain how an end user really sees it without all the tech jargon getting in the way.
Very narrow field of view: I had to fish for objects by turning in place and looking up and down. Not good for AR.
No black, obviously. They can't block light from going through rendered objects. This in turn makes colors somewhat ghostly.
Very stable. Once I get an object I can walk around it and it stays there like a real one.
"Clicking" on an object is hard, but maybe it was hard with a mouse when I used it for the first time.
And Lego/Minecraft on the tabletop.... no thanks I don't want games set in my lounge room, that's an incredibly boring place to set a game in.
Where's that bit? Where's that tool? How do I repair this thing? Who attached that bit to this thing?
From a developer standpoint, it's terrible. Unity only just now supports UWP apps, and only barely; many libraries just don't work. We are making a collaborative 3D app that needs access to the entire screen and a lot of system-level resources. The only nice thing is that the anchor system is an operating-system-level abstraction.
TL;DR: After using one regularly for a few months, I'd say pass on this device. It's a barely usable AR platform with poor battery life and poor FOV, and it's an absolutely unusable AR gaming platform.
Positive: Voice Commands, No Computer needed, Unity is great - development is easy
Negatives: Field of view is just weird, Not as intuitive as it could be, Cannot sell it - dev only
My observations:
Getting it to recognize my air clicks is the bane of my existence. Object permanence works very well.
Before I used it, I thought people were hyperbolic when mentioning the narrow AR FOV. It really limits the experience.
Moving objects around is very annoying when it doesn't seem to recognize half my gestures. However, when it does recognize my gestures, it's fairly straightforward to move objects along each of the three axes.
Peers make fun of you for wearing something cumbersome.
After playing with it for a while though, I have to conclude it's not yet a consumer product and probably won't be for many years. Maybe it will find a place in the enterprise.
I would say info isn't that sparse (as it used to be). Search the Holographic Academy, watch their youtube channel, and subscribe to the Windows MR blog/newsletter.
Have demos of stuff I built, feel free to DM if you want to see.
It's a fun proof of concept, but not much more.
There is a trick to getting the HoloLens comfortable: just make sure the inner strap is tilted ~45 degrees. This might sound obvious, but I observed a lot of people just leaving the inner strap horizontal, which puts a lot of weight on the nose and too much strap pressure on the forehead.
As for the tech, same opinion as everyone else. Idea is awesome, works great, FOV is limiting factor.
As for development, it was super easy. Unity has built-in support, so you just press play in Unity and it automatically gets sent to the HoloLens.
I developed for the Hololens and wore it a lot over a couple of weeks.
Get a Vive now, wait 2 years before getting an AR device.
If I want to watch a movie in a public area for instance, I'd love a VR mode to tune out everything else.
The user experience --------------
HoloLens is mesmerizing. I'm not big into VR or anything, and will often make the argument that VR hype will die out and is a fad. But there's something very different about what Microsoft is doing. The ability to incorporate reality as a first-class citizen in your 3D applications (or vice versa) is groundbreaking. People often complain about the FOV when they first try it out, and I had the same complaint, but your brain is able to compensate once it gets used to it, and then you stop noticing it. That's something you don't get from a short trial at a tech demo. The user inputs are indeed very clumsy still. We'll need vast improvements in this area before HoloLens can feel immersive. But the amazing thing is that this first pass isn't that bad. It can track your hands and it's a computer that sits on your head. I mean, come on! I'm only 22 and even I think that's amazing.
The developer experience ------------------------
One of the major shortcomings of HoloLens development is its dependency on Unity. C# isn't the problem. I love C# and use it daily now for web development. The problem is that Unity uses .NET 2.0, and good luck finding C# libraries that are compatible. So for every new thing you want to do, you're going to have to find a "Unity compatible" C# library, which is very annoying.
Unity will work for what you need most of the time, but it turns out if you want to try something custom (like your own gestures) then you're out of luck, because the Unity APIs are limited in that way.
I suppose I'm mostly just not a fan of Unity's component model. Constantly switching between adjusting settings in the IDE and coding feels like a bad way of developing.
Okay, so maybe you want to try something a little lower level. Microsoft offers a C++ API as well, and for the most part this is what you want if you need to harness the limited power of the HoloLens. I haven't played around with all of the APIs, but I know of one in particular that left a bad taste in my mouth (this applies to Unity too): the spatial anchor API. For those of you who are unfamiliar, the spatial anchor API is the only way to acquire a durable and persistent reference to a real-world location. This is done (I think) with sensor data (orientation, lighting, and images captured by the four on-board spatial-mapping cameras). It is really an incredible feat of engineering; however, it produces a binary of around 15MB, far too large to store in a database at scale. I'd like to see MS open up raw access to those sensors so middleware developers can try their hand at improving this aspect of HoloLens.
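For context, here is roughly what using the anchor store looks like from the Unity side (a sketch using `WorldAnchor` and `WorldAnchorStore` from `UnityEngine.VR.WSA` / `UnityEngine.VR.WSA.Persistence` — `UnityEngine.XR.WSA.*` in later Unity versions; the "desk-hologram" id and class name are just examples). The large binary blob mentioned above lives inside the OS-managed store, and this API is the only handle you get to it:

```csharp
using UnityEngine;
using UnityEngine.VR.WSA;             // WorldAnchor
using UnityEngine.VR.WSA.Persistence; // WorldAnchorStore

// Persist a spatial anchor: attach a WorldAnchor to an object and save
// it by id; on the next run, Load() re-locks the object to the same
// real-world spot.
public class AnchorPersistence : MonoBehaviour
{
    private WorldAnchorStore store;

    void Start()
    {
        WorldAnchorStore.GetAsync(s =>
        {
            store = s;
            // Try to restore a previously saved anchor; if none exists,
            // anchor this object where it currently sits and save it.
            if (store.Load("desk-hologram", gameObject) == null)
            {
                var anchor = gameObject.AddComponent<WorldAnchor>();
                store.Save("desk-hologram", anchor);
            }
        });
    }
}
```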
If C++ isn't your thing, there's a library called HoloJS. You guessed it, it's a JS runtime for HoloLens with access to native libs. I actually started my own variation on this (called HolographicJS) before Microsoft released theirs, but I'm happy they've taken over.
The future ----------
So what does this all mean for a device that seemingly has its share of problems to overcome? Well, after trying it I'm fairly confident that MR, as Microsoft calls it, is here to stay. The ability to mix reality with virtual reality, and augment that with a layer of environmental understanding, is really incredible. I think we're just scratching the surface of the possibilities.
HoloLens is the first in a new field of devices that I believe will come to replace all forms of computers we currently use: phones, laptops, desktops, tablets, etc. Even things like IoT devices. Why spend time building your own interfaces when you can just augment the user's?
If v2 had better FOV and improved input tracking, I'd consider it a major success. But if it also included improved spatial mapping and a reliable GPS, that could bring us into a whole new world, quite literally.
The way I see it, the first company to solve outdoor use of an MR device, and solve what I'm calling the "universal spatial map" problem, will run the world of tomorrow.
Imagine every machine being capable of interfacing with you without the need for a screen or separate device. Imagine walking down the street, gesturing to a restaurant and placing an order before you even get inside.
Further down the line: what if we could transfer the consciousness of a dying car-crash victim into a computer? What if that person could then be virtually transferred back to the scene of the accident, to be greeted by those who are augmented?
Anyway, that's all crazy futurism; but the point is that reality starts with what is being done with HoloLens, and I think it's an incredible thing to be a part of.
To me, HoloLens feels like the Apple II.