Making a game where people have to sit in front of a table, stand, or hold the device up is very restrictive and instantly makes you niche. You get a far bigger potential slice of the market making a cool 2D puzzle.
The tech was cool and worked reasonably well until (a) you moved a lot or (b) the light was low. But unfortunately the market is about playing on the toilet, on public transport, or late at night in bed, in the dark. None of which really works with AR.
As far as I see this going, it's still looking for a "killer app" (I hate that term).
My best guess is we'll see this used a lot more in a business environment and even then, glasses are a far better approach (hint hint, hold on to your whiskers).
They're laying the platform groundwork early which makes sense since it's a bet on what's next for human computer interaction.
If they can pull the glasses off there's huge opportunity there for all sorts of things, which will be really interesting and better than looking at little glass screens. Without that hardware it's super niche and I don't think it has much broad use.
If anyone can pull it off when the hardware is ready it'll probably be Apple.
FB is obviously trying really hard to be there too, but I just don't think people will want it to come from FB (and forcing Oculus under their brand makes it so that it would have to be).
Magic Leap will be to whatever Apple ends up making as General Magic was to the iPhone.
The rough idea was directionally correct, but way too early with hardware that just wasn't good enough.
The same could have been said about the Chromecast.
I think price matters a lot, and Apple already loses a big audience because of that (it's not their market, I know).
For reference, the iPad is still priced reasonably cheaply.
PS: The Oculus works well with minimal setup.
One of the most interesting AR games I've tried was Kings of Pool's same-space multiplayer. You set up a virtual pool (billiards) table and walk around the room around that same table. It worked quite well; the game (AR & network) could keep up with the relatively low tempo of a turn-based game well enough. But that was mostly interesting because of the novelty, not necessarily sticky for long-term repeat play.
What prompted me to explore the tech and write the blog post was indeed work on something other than a game.
Even though people rarely play it while on a sofa, would it be workable with the current Apple AR hardware?
I've been starting to use ARKit from a different angle.
Use case: test out 3D designs of replacement classic car parts I'm manufacturing as a side project.
While crude right now, I think this sort of stuff is a game changer. I'm regretting not picking up a LiDAR-enabled iPhone 12 Pro instead of a 12 to do better scanning and placement. I guess I'll pick up an iPad Pro when it's upgrade time on that front.
Here are two quick videos I threw together:
https://www.youtube.com/watch?v=7K0-vK2wafA
Workflow:
- build 2D models in SolveSpace (https://solvespace.com)
- export DXF to Fusion 360
- convert DXF to sheet metal
- bend sheet metal in Fusion 360
- export OBJ from Fusion 360
- save OBJ file to iCloud Drive
- open OBJ file in AR Viewer app (https://apps.apple.com/us/app/ar-viewer-augmented-reality/id...)
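For anyone wanting to skip the separate viewer app, ARKit plus SceneKit can load the exported OBJ directly. A minimal sketch, assuming the file has already been synced down from iCloud Drive locally and that the CAD model is authored in millimeters (both assumptions; the function name and scale factor are mine, not from the workflow above):

```swift
import SceneKit
import SceneKit.ModelIO

// Load an exported OBJ via ModelIO and wrap it in a SceneKit node,
// ready to attach to an ARSCNView's scene at a hit-tested position.
func loadPartNode(from url: URL) -> SCNNode {
    let asset = MDLAsset(url: url)          // ModelIO parses OBJ natively
    let scene = SCNScene(mdlAsset: asset)   // bridge ModelIO -> SceneKit
    let node = SCNNode()
    for child in scene.rootNode.childNodes {
        node.addChildNode(child)
    }
    // CAD exports are often in millimeters; ARKit works in meters.
    node.scale = SCNVector3(0.001, 0.001, 0.001)
    return node
}
```

From there, placement is a matter of adding the node at the world transform of an `ARRaycastQuery` (or older hit-test) result.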
Sidenote:
There still isn't a replacement for physically printing the parts before I have them laser-cut out of metal. I 3D print models and test-fit pieces as intermediate steps too.
Really excited to see iOS developers digging into ARKit more; curious what else is possible here.
Would be cool if the object-matching tech (this[1] + more machine learning?) in ARKit was advanced enough that it could recognize a car seat or dashboard and place things automagically.
[1] https://developer.apple.com/documentation/arkit/scanning_and...
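For context, the detection side of that API [1] is already fairly small; the gap is that it only matches objects you scanned beforehand, not arbitrary seats or dashboards, which is where the extra machine learning would come in. A hedged sketch, assuming a `.arobject` scan (made with Apple's scanning sample app) has been bundled in an asset catalog resource group named "AR Resources" (the group name is an assumption):

```swift
import ARKit

// Configure world tracking to detect previously scanned reference objects.
func makeDetectionConfiguration() -> ARWorldTrackingConfiguration {
    let configuration = ARWorldTrackingConfiguration()
    if let objects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Resources",
                                                        bundle: nil) {
        configuration.detectionObjects = objects
    }
    return configuration
}

// When ARKit recognizes a scanned object, the session delivers an
// ARObjectAnchor; content placed relative to that anchor lines up
// with the physical object.
```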
I haven't gotten into the groove with it for 3D workflows. I probably need to do some more tutorials on that front.
But that would be the general idea: scan the interior, 3D model parts to fit in that scanned world, then 3D print / CNC cut those parts to fit.
• Everybody is connected to the same "AR space" and sees the same AR objects.
• Different people may have different view/interact access to different objects.
• Some form of tactile feedback is assumed. In Apple's case we will assume this to be provided via the Apple Watch or a hypothetical AR glove accessory.
• Many things that are currently physical could be replaced with virtual versions: screens, keyboards, signage, fashion, pets.
Many developers have already noticed the gradual introduction of APIs and hardware like LiDAR that are so obviously meant to support a future glasses product, but I wonder if anyone's connected the dots to features like the spatial audio recently introduced to AirPods.
Apple's AR strategy is very likely going to be divided across their different products: iPhone (processing), Glasses (display), Watch (motion control and tactile feedback), and of course AirPods (audio). Many of their users already own most of those devices. No other company is so perfectly positioned to finally deliver ubiquitous AR.
As with every technology we have right now it will be used against you. It doesn't have to be that way, but that's the world we live in now.
Perhaps I'm still too new to ARKit, but to me it doesn't look entirely straightforward (more backend lifting) to do some of the things you're mentioning. I hope Apple releases some updated geolocation APIs to better enable the experiences you mention.
Kai Faust's [1] work with Infopop [2] is definitely worth playing around with
[1]: (https://twitter.com/kaifaust)
- Azure Spatial Anchors (iOS, Android, HoloLens, ROS) [1]
- Google Cloud Anchors (iOS, Android) [2]
- Apple Geo Anchors (iOS; not available in all locations, so not broadly usable yet) [3]
Integration with these SDKs is pretty easy, and Azure Spatial Anchors and Google Cloud Anchors are production-ready. Minecraft Earth, for example, used Azure Spatial Anchors for shared multiplayer AR experiences. Infopop is nice, but it doesn't have multiplayer or long-term persistence, so it's not really useful as-is. But I wouldn't be surprised if it eventually uses one of these SDKs to provide better multiplayer + shared relocalization. You could probably throw together a rough version of Infopop in a week if sufficiently motivated; ARKit does a lot of the heavy lifting there.
[1]: (https://docs.microsoft.com/en-us/azure/spatial-anchors/overv...)
[2]: (https://developers.google.com/ar/develop/java/cloud-anchors/...)
[3]: (https://developer.apple.com/documentation/arkit/argeoanchor)
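For a sense of scale, the Apple option [3] is only a few lines once availability is confirmed. A minimal sketch (the coordinate is a placeholder, and geo tracking only works in supported cities, on devices with GPS and a recent chip):

```swift
import ARKit
import CoreLocation

// Pin AR content to a real-world coordinate with ARGeoAnchor (iOS 14+).
func startGeoAnchoredSession(_ session: ARSession) {
    guard ARGeoTrackingConfiguration.isSupported else { return }
    session.run(ARGeoTrackingConfiguration())

    // Placeholder coordinate; replace with a location in a supported city.
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    session.add(anchor: ARGeoAnchor(coordinate: coordinate))
}
```

The cross-platform SDKs above solve the same relocalization problem with visual feature maps instead of GPS, which is why they work in more places today.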
https://www.penguinrandomhouse.com/books/59659/virtual-light...
Impressive how well that works.
I don’t necessarily frequent specific communities too much, more just try to find stuff via googling about various interests and projects, and talking to friends and colleagues.