I say this not to criticize you or to excuse the mistake by Oculus (they really needed to countersign their cert with a timestamp server), but to educate. These are non-obvious issues to people who don't follow the VR sector.
Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that stage; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
But there's much more. Here is a paste of a comment I made elsewhere:
Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset; there is a custom audio stack with spatialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
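To put rough numbers on those timing constraints (back-of-the-envelope arithmetic, not anything from Oculus's internals):

```python
# Back-of-the-envelope timing budgets for the constraints above.

def interval_ms(rate_hz: float) -> float:
    """Time between updates, in milliseconds, at a given refresh rate."""
    return 1000.0 / rate_hz

controller_interval = interval_ms(1000)  # hand controllers at 1,000 Hz
sensor_interval = interval_ms(60)        # tracking sensors at 60 fps
frame_budget = interval_ms(90)           # headset display at 90 fps

print(f"Controller update every {controller_interval:.2f} ms")  # 1.00 ms
print(f"Sensor frame every {sensor_interval:.2f} ms")           # 16.67 ms
print(f"Render budget per frame: {frame_budget:.2f} ms")        # 11.11 ms
```

Missing that ~11 ms render budget even once produces visible judder, which is why so much of the stack is dedicated to shaving latency.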
Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Not to mention, the premise that monitors don't have drivers is also mistaken. They may not be necessary, but they are available[1]. And, the decision to sign kernel drivers is not a poor choice by Oculus, but a mandate from Microsoft for Windows 10 build 1607 and above.[2] A cert is, indeed, necessary to function.
Hope that was informative.
[1] http://www.aocmonitorap.com/my/download_driver.php [2] "Starting with new installations of Windows 10, version 1607, Windows will not load any new kernel mode drivers which are not signed by the Dev Portal." - https://docs.microsoft.com/en-us/windows-hardware/drivers/in...
Most (I won't say all) certificates expire. However, there's a huge difference between an expired certificate and one which renders a driver invalid - and this is one of the two places Oculus erred.
When you sign a driver, you want it countersigned by a timestamp server. This cryptographically proves that the cert was valid at the time of signing (the crypto ensures a hacker can't just change the metadata with a hex editor), so the signature on the driver remains valid even after the signing cert expires. Without the countersignature, the OS can only evaluate the cert's validity as of the moment of the check. Two days ago that was fine, but yesterday the signing cert expired and everything broke.
This was screw-up number one. Apparently, during the build process from Oculus's v1.22 to its 1.23 release, the timestamp countersignature was removed. This was obviously a mistake, because that build took place about 30 days ago, and no sane person would intentionally do something that would bring down their user base a month later.[1]
Obviously the second mistake was letting their certificate lapse. This was compounded by the fact that their update app was signed by the same cert, so they couldn't just push a quick fix (because the updater didn't work).
So in short, signatures don't expire, but the certificate used to create the signature does. With a timestamp countersignature, the code would have kept running, but no new code could be signed with the old (expired) cert.
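The difference can be sketched as a validity check. This is a deliberately simplified model; the real Authenticode scheme also cryptographically binds the timestamp to the signature so it can't be forged:

```python
from datetime import date
from typing import Optional

def signature_valid(cert_not_before: date, cert_not_after: date,
                    countersign_time: Optional[date], today: date) -> bool:
    """Simplified model of the OS's code-signature validity check.

    With a timestamp countersignature, the question becomes "was the
    cert valid when the code was signed?"  Without one, the OS can only
    ask "is the cert valid right now?"
    """
    check_date = countersign_time if countersign_time else today
    return cert_not_before <= check_date <= cert_not_after

# A cert that expired yesterday; the driver was signed a month ago.
not_before, not_after = date(2015, 3, 7), date(2018, 3, 7)
signed_on, today = date(2018, 2, 5), date(2018, 3, 8)

print(signature_valid(not_before, not_after, signed_on, today))  # True
print(signature_valid(not_before, not_after, None, today))       # False
```

(The dates above are made up for illustration; they are not Oculus's actual cert dates.)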
Oculus missed some pretty big devops gaps, and suffered a big black eye for it.
But it had nothing to do with DRM, planned obsolescence, needing to connect to the internet, or Facebook data capture.
[1] Other commenters have mentioned that if a timestamp server is down at build time, the build can silently complete without the countersignature. Maybe that's what happened?
The short answer is:
- a "certificate" contains the public half of an asymmetric key pair, plus a ton of metadata[1] describing that key: validity period, algorithms used, version, etc. (the matching private key lives separately, though tooling often bundles the two together)
- a "signature" is the result of a crypto operation on data that proves the data (a) has not changed since the operation, and (b) the person doing the signing owns the private portion of that asymmetric key.
As I said in my other message, a signature doesn't expire, but it is directly tied to (and generated with the private key behind) the certificate used to create it. So if that certificate expires (or is revoked), it calls into question the validity of the signature(s) created from it.
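As a toy illustration of (a) and (b), here is textbook RSA with deliberately tiny numbers; real code signing uses large keys, padding schemes, and X.509 certificates:

```python
import hashlib

# Toy RSA key pair (insecure, illustrative only): n = 61 * 53,
# e is the public exponent, d the private one (e*d = 1 mod phi(n) = 3120).
n, e, d = 3233, 17, 2753

def sign(data: bytes) -> int:
    """Hash the data, then apply the PRIVATE exponent (the crypto operation)."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(digest, d, n)

def verify(data: bytes, signature: int) -> bool:
    """Recompute the hash and undo the signature with the PUBLIC exponent."""
    digest = int.from_bytes(hashlib.sha256(data).digest(), "big") % n
    return pow(signature, e, n) == digest

driver = b"kernel driver bytes"
sig = sign(driver)
print(verify(driver, sig))              # True: (a) data unchanged, (b) key owned
print(verify(b"tampered driver", sig))  # the hash no longer matches, so this fails
```

Only someone holding d can produce a signature that e will accept, which is exactly the ownership proof in (b).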
Let me know if you're interested in more background on asymmetric cryptography and the relationship between public keys and crypto, private keys and signatures, and the role of certificate authorities vs. a PGP-oriented 'web of trust'.
[1] https://en.wikipedia.org/wiki/X.509#Sample_X.509_certificate...
Wait, is this new? I haven't used my Oculus in over 6 months because of how hard it was to interact with the desktop (among a few other things) while in-game. Is this a standard feature of Oculus's framework now?
But I use it and it's amazing.
Edit: Here's the "sizzle reel": https://www.youtube.com/watch?v=SvP_RI_S-bw
Here's just someone using Home: https://www.youtube.com/watch?v=sMjlM5vFSA0
And here's a blog post about it: https://www.oculus.com/blog/rift-core-20-updates-beta-coming...
Could you share more info on this? Is it actually possible to poll the devices at that resolution from code?
So, the SDK takes all the information in directly, does its calculations, and exposes only the resulting positions and orientations for hands and head. This resulting info is what developers typically use.
Here's an excerpt from a blog post[1] regarding the IMU and sensor fusion:
> With the new Oculus VR™ sensor, we support sampling rates up to 1000hz, which minimizes the time between the player’s head movement and the game engine receiving the sensor data to roughly 2 milliseconds.
> <snip interesting info about sensor fusion>
> In addition to raw data, the Oculus SDK provides a SensorFusion class that takes care of the details, returning orientation data as either rotation matrices, quaternions, or Euler angles.
Note that this blog is from back in dev kit 2 days. It's possible that Oculus removed the ability to retrieve raw data; in my hobbyist efforts I only use Unity's integration and don't work directly against the SDK.
[1] https://www.oculus.com/blog/building-a-sensor-for-low-latenc...
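For a flavor of what sensor fusion does with those 1,000 Hz samples, here is a basic complementary filter. This is a classic hobbyist technique, emphatically not Oculus's actual algorithm:

```python
def complementary_filter(angle_deg: float, gyro_rate_dps: float,
                         accel_angle_deg: float,
                         dt: float = 0.001, alpha: float = 0.98) -> float:
    """One step of a basic complementary filter at 1 kHz sampling.

    Integrate the gyro rate (fast but drifts), then nudge the result
    toward the accelerometer's gravity-derived angle (noisy but stable).
    """
    gyro_estimate = angle_deg + gyro_rate_dps * dt
    return alpha * gyro_estimate + (1 - alpha) * accel_angle_deg

# Simulate one second of a head held still at 10 degrees while the gyro
# reports a constant 0.5 deg/s bias: the estimate converges to ~10 degrees
# instead of drifting off with the bias.
angle = 0.0
for _ in range(1000):
    angle = complementary_filter(angle, gyro_rate_dps=0.5, accel_angle_deg=10.0)
print(round(angle, 1))  # 10.0
```

Oculus's SensorFusion class does far more (magnetometer correction, prediction, quaternion output), but the drift-vs-noise tradeoff it manages is the same one this sketch shows.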
Face it, today's VR headsets simply are monitors that you wear on your face (head-mounted displays). Anyone thinking otherwise is lying to himself to make it sound more complicated than it is. They include a few input peripherals as well, none of which is particularly complex (Valve's Lighthouse system is probably as complex as it gets).
And lastly, none of these points should require a certificate. Every computation can be done locally, without the need for an internet connection.
To be a bit more specific, let's break down the arguments (nothing against you personally; I'm just interested in the points themselves):
> Monitors work without low-level drivers because their maturity (and lack of innovation) allows the hard stuff to be embedded in the operating system. VR is not at that state; it is emergent, and the capability stacks require additional integration into the OS. Vendors frequently add unique features, and will continue to do so for some time, making standardization difficult.
This is true... somewhat. For now, the only integration that has been done in the Linux kernel is DRM (direct rendering manager) leasing [1], which allows an application to borrow full control of the peripheral to bypass compositing. That, and making sure that compositors don't detect HMDs as normal displays (so that they don't try to display your desktop on them). Please note that neither of these is actually needed if the compositor is designed to support HMDs from the ground up. They are just niceties; otherwise the HMD is treated like a regular device.
> Even at its simplest level, a VR headset with 6 degrees of freedom is two monitors that must remain in absolute synchronization while also returning positional information to the CPU. This alone is enough to go beyond "standard monitor driver" functionality.
Even if the two displays are physically separate, keeping them in sync is likely handled by the HMD board itself. The monitors DON'T return positional information; they just display stuff (the accelerometer, gyro, compass, etc. are just other peripherals that happen to sit on the same board).
> Oculus (and Steam, via SteamVR) engineers a plethora of low-level code to reduce latency and add features. It's not just a monitor, but a whole set of SDKs, APIs, devices, and drivers.
Just like every peripheral under the sun, isn't it?
> For the Rift, the hand controllers are wireless input devices refreshing at 1,000Hz; the sensors (to know where you are in the room) are USB devices with tight 60 fps synchronization to LEDs on the headset
Believe it or not, frequency and latency are probably not the most complicated things about the Lighthouse system; these specs are actually not uncommon for USB devices (I admit that I don't have a good example in mind, though).
> there is a custom audio stack with spacialized audio and ambisonic sound; video needs specialized warping to correct lens distortion, interpolate frames, and maintain a 90 fps image, etc.
We are NOT talking about HMDs anymore at this point, and these feats have been accomplished countless times already, in various systems. Spatialized audio already exists in multiple HRTF implementations all over the place, including OpenAL, and would probably be a lot more common if Creative didn't try to sue everyone into the ground as soon as they do something interesting. Distortion correction is not really complicated, and was done in Palmer Luckey's first proof of concept (or was it John Carmack who implemented it?). Interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
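For reference, the distortion correction in question is essentially one radial polynomial per eye. A sketch with made-up coefficients (real headsets calibrate these per lens):

```python
def undistort(x: float, y: float, k1: float = 0.22, k2: float = 0.24):
    """Pre-warp a point in normalized eye space so the lens's distortion
    cancels out.  Classic radial polynomial model; the k1/k2 defaults
    here are illustrative, not any real headset's calibration."""
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The image center is untouched; points near the edge get pushed outward.
print(undistort(0.0, 0.0))  # (0.0, 0.0)
print(undistort(0.5, 0.0))  # x scaled by 1 + 0.22*0.25 + 0.24*0.0625 = 1.07
```

In practice this runs as a fragment shader over the rendered frame, but the math is no more than the above.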
> Not to mention, the system creates a virtual monitor so you can see your 2D desktop while in VR. You can reach out and "grab" a window, rip it from the desktop and position it in your VR world. Pin it, and when you enter a game that window remains and is fully interactive with the Touch controllers emulating a mouse. Maybe you want to play Spotify in the background of a table tennis game, or be able to check a Discord screen while working through a puzzler, or watch YouTube videos while flying your ship in Elite:Dangerous. One guy set up a video feed from his baby monitor so he could watch his kid napping while in VR. This is obviously not a standard feature of the Windows compositor.
Again, this has nothing to do with HMDs. But congratulations, you just wrote another compositor and reinvented multitasking. This has been done countless times, and VR compositors have been made by multiple teams. Here is a nice open source one: [2].
> All this needs to work across AMD and Nvidia, in Unity, Unreal, or any custom game engine. It's not off-the-shelf driver stuff.
Well, so does everything else: controller support, graphics API support (whoops, actually the only two things needed), but also language support, processor architecture support, sound system support, operating system support, etc. Everything needs a bit of code to support new architectures. Supporting the display portion of an HMD is relatively straightforward and actually uses off-the-shelf APIs. Well, you have to correct for distortion, but I would be surprised if APIs didn't come out [3] to support small variations between devices.
--
To conclude: yes, it's an impressive technology stack, but you could pick literally any other device in your computer and find comparable complexity. I am not trying to downplay the amount of work that went into HMDs and their stack, just pointing out that it's relatively common and straightforward.
And a HMD is by definition a monitor on your face :)
--
On the other hand, I just read the explanation (after writing this), and I agree that having your own kernel module makes sense for some of this (especially on Windows; on Linux you would just mainline support) if you want to make it happen faster. Yet, most of the above arguments do not serve the discussion ;)
I can get kernel drivers needing to be signed, but requiring the cert to remain valid after installation is a bit of a reach, isn't it?
Edit: thank you for the detailed explanation below.
[1] https://keithp.com/blogs/DRM-lease/
Counterargument: the 16 things that happen other than just displaying images on the screen aren't relevant, have been done before, or have equivalent complexity to other systems.
Well OK. I just can't argue with that.
"A modern CPU SOC is no more than a souped up 6502."
That's true, if you ignore the integrated video, complex cache management, integration of networking/sound/northbridge/southbridge, secure enclaves, and significantly higher performance characteristics that result in subtle changes driving unexpected complexity. All of those things have been done elsewhere.
So if that's your perspective then we'll just have to agree to disagree.
Though I will point out that all of those non-monitor components you described also require custom drivers, which require their code to be signed, which was ultimately what the OP took issue with. I'm frankly surprised that, after acknowledging the amount of re-implementation VR requires across numerous non-monitor disciplines, fusing the data in 11ms for a total motion-to-photon latency of 20ms or less, you still feel this is "common and straightforward."
But OK. I don't know your coding skill level, so this may be true.
And per this point:
> interpolation sounds a bit more complicated, I'll grant you that, but still pretty doable.
Valve has still not released an equivalent to Oculus's Asynchronous Spacewarp. If you feel it is "pretty doable," you would do a huge service to the SteamVR community by implementing it and providing the code to Valve.
See https://developer.oculus.com/blog/asynchronous-spacewarp/ for details.
An internet connection is required for updates, for instance, in case you forgot to countersign your drivers with a timestamp server.
Whoops.
> "Each year, the FDA receives several hundred thousand medical device reports of suspected device-associated deaths, serious injuries and malfunctions."
It is also specious to argue that a consumer product is being used for live surgeries without FDA approval.
This does not excuse the mistake, nor does it change the fact that the error will make people question the reliability of the product - as they should.
However, mistakes do happen, even big ones. Rockets blow up. Airbags have defects that make them not work. McAfee pushed out an antivirus update that deleted a Windows system file, crashing hundreds of thousands of PCs.
The important questions are: how does the vendor respond, what procedures do they put into place to prevent it from happening again, and are those procedures enough to give future buyers confidence that the issues are addressed?
Saying "that shouldn't have happened," while perhaps true, is simply not constructive.