You didn't read what I wrote. I'm talking about the natural progression in how HW and SW platforms evolve. I used 3D accelerator hardware as an example of initial fragmentation and divergence followed by convergence.
Where did I write that there are no legitimate use cases for AR? I find games the LEAST PLAUSIBLE use case, because holding your phone up and moving around a virtual plane makes a nice demo but is miserable for extended game sessions.
Indoor navigation and things like Google Lens are the most plausible uses. And many of the examples in your ideas link, like showing seat positions, aren't possible in ARKit because it doesn't have persistence.
A lot of the examples in the ideas thread require area understanding, like Google's Visual Positioning System or Tango. If you want a consumer to be able to pop open the camera and instantly have it tell him where his seat is on an airplane, you need to have already stored persistent features of the plane's interior. (e.g. Tango ADF https://developers.google.com/tango/overview/area-learning)
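To make the point concrete, here's a toy Python sketch of the idea (not any real SDK API; the descriptors, labels, and threshold are all made up): relocalization is just matching a live camera frame against a previously stored feature map, so if nobody has captured and persisted that map first, there is nothing to match against.

```python
# Toy illustration of area-learning-style relocalization.
# NOT Tango/ARKit code -- descriptors are fake 3-vectors, labels are invented.
import math

# The "stored ADF": feature descriptors captured earlier by area-learning
# hardware, each tagged with a location in the cabin.
stored_features = {
    (0.12, 0.85, 0.31): "seat 14A",
    (0.90, 0.10, 0.44): "seat 14F",
    (0.55, 0.52, 0.97): "galley",
}

def relocalize(frame_descriptor, store, max_dist=0.2):
    """Nearest-neighbor match of a live frame against the persisted map.

    Returns the stored label if a close enough match exists, else None.
    """
    best_label, best_d = None, float("inf")
    for desc, label in store.items():
        d = math.dist(frame_descriptor, desc)
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= max_dist else None

# A frame close to a stored descriptor relocalizes instantly:
print(relocalize((0.13, 0.84, 0.30), stored_features))  # seat 14A
# With no persisted map, the same frame matches nothing:
print(relocalize((0.13, 0.84, 0.30), {}))               # None
```

The second call is the whole argument: SW-only AR on the phone can do the matching, but someone with mapping-capable hardware has to populate `stored_features` first.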
Look at the Tango app in Lowe's (https://www.youtube.com/watch?v=KAQ0y19uEYo). This is the kind of AR that is useful to the majority of people, the kind of AR that is a killer app, and the kind of AR that won't be available without area-understanding-capable HW deployed to create these maps so they can be consumed by cheaper SW-only AR stacks.