That's...not great? For comparison, ARKit on iOS is going to support 400 million devices at launch (very rough numbers: ARKit runs on any new iPhone Apple's released over the past two years - iPhone 6S/SE/7 - and they sell over 200 million a year). Hardware fragmentation is a tough problem to solve.
> With more than two billion active devices, Android is the largest mobile platform in the world.
So they'll get it onto 5% of active devices, or one in every twenty. Not great, perhaps not even good.
I've not been a fan of Android fragmentation for a while now, and have been surprised that Google hasn't been able to do more to attack the issue. Even when Treble launched, I asked myself... is that all? Rock and a hard place, I guess.
I actively avoid Android devices because of this issue. My last Android device was a tablet in 2011.
Therefore it's fine that they get only 5% to start. I assume it's going to turn into 10%, then 20% quickly. For some perspective the iPhone only has about 15% market share globally.
Curious, what exactly were you expecting? There hasn't even been enough time for Treble to play out to see if it brings results.
Yeah, and fragmentation across Android versions is pretty bad. Also, the quality of the underlying hardware might not be as consistent as in iOS devices. It relies on a lot of sensors to get it right, not just raw CPU power and a nice camera.
The Android CDD maintains requirements for sensor accuracy and performance: https://source.android.com/compatibility/android-cdd#7_3_sen...
Treble could have been the solution, yet they are still allowing for OEM customizations, and the OEMs are the ones expected to keep pushing the updates, as discussed on the ADB Podcast.
Guess how many OEMs will choose to push Treble updates instead of selling new devices.
Stop selling low spec phones?
Of course it isn't gonna run in a supported configuration on some dumb phone from 5 years ago. Those people can't afford or don't care about VR.
Apple has a significantly easier job here - especially since it doesn't need to get OEM buy-in for these kinds of things :/
first, 100MM or even 400MM devices is no fucking joke...
but Q is: bandwidth to deliver drivel to the devices... not the apps - but how much bandwidth will ads consume over consumed content.
If I were a VC, I would be purchasing every-single-pipe-in-existence for the future.
We went from pipes-are-pope to content-is-king -- but the fact is that the control of information is in the plumbing ( at the high level ) -- not the holders of attention... they are just firewalls.
Facebook is a firewall.
Google is a firewall.
Reddit is a honeypot.
etc...
you get the analogy.
so...
Who owns the pipe.
ISPs were vilified... FFs the NSA sent the head of Qwest to prison for not spying... the scientologists refuted the NSA on carnivore (because they were already monitoring the networks for clams) -- the .gov went after the guy who detailed all fibers.. -- how many more examples would you like...
Can you explain your last paragraph a bit? It's a bit telegraphese and I'd like to know to what individual things you are referring. Thanks!
This is no different. The Android development process is painful (the most verbose, cruft- and boilerplate-filled Java), cumbersome to organize and build (Gradle is terrible, and buggy) and debug (the integration with Studio is just clunky). About the only thing Google does better is testing releases through the developer console.
It's nice to see them finally providing something similar to ARKit. I just wish they'd work on all the other things that make Android development a horrible experience.
Developing for iOS/Swift is a more pleasant experience; the APIs are simpler and better thought out. But it's not a huge difference.
Android Studio is a better (but uglier) IDE, but not by as much as people say.
Android's Activity and Fragment APIs are a horrible joke. I mean, Google is supposed to have these super smart devs, how could they design this mess?
Creating UI in iOS sucks. Specifically, Auto Layout and Interface Builder. I usually use Fb's Yoga, it's not ideal, but less of a pain.
Swift is one of the best-designed languages out there.
As for your comment about Activity/Fragments, they've had a few I/O talks going over their thoughts:
That's a pretty big price they charge.
iOS:
+ Swift
+ Fewer screen sizes to worry about
+ Fewer iOS versions to worry about
+ Xcode is much lighter on resources
- Mac only
- Xcode might crash every now and then
- Probably need an iOS device, as the simulator is very slow
- $100/yr developer fee
Android:
+ Kotlin
+ Studio runs everywhere
+ No developer fee
+ More stable IDE
+ Decent emulation
- Countless screen sizes to worry about
- Lots of Android versions to worry about
- $25 developer account fee
- Android Studio is resource heavy
- NDK requires lots of JNI boilerplate
Though, with all of this AR stuff, I'd just go the Unity/Unreal route, as it will probably be very game-y and such.
Only if you're doing OpenGL, as that's actually rendered in software (not sure about Metal). Otherwise the simulator (not emulator!) is fast, as it should be, since it's running native code (which is why it's a simulator, not an emulator). If anything, it's important to test on devices because the simulator can mask performance problems (though as devices become more and more powerful, that becomes less of an issue).
https://rustyshelf.org/2014/07/08/the-android-screen-fragmen...
I think you are taking it the wrong way and you are not at fault, there is a lot of FUD about screen sizes.
Designing for flexible screen sizes and densities is pretty easy and I don't think I would gain any significant amount of time if Android was limited to 10 screen sizes.
You just think in terms of density-independent pixels & inflection points where you adapt your design (one more row, multiple panes, etc.)
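For what it's worth, the arithmetic behind density-independent pixels is genuinely simple: 1dp is defined as 1px at 160dpi, so everything scales linearly with screen density. A minimal sketch (the helper names and the 600dp breakpoint mirror common Android conventions, but the code itself is illustrative, not the framework API):

```java
// Sketch of density-independent sizing: Android defines 1dp = 1px at 160dpi
// (mdpi), so a dp value scales linearly with the screen's dpi.
// Helper names here are illustrative, not the framework API.
public class DpMath {
    // Convert dp to physical pixels for a given screen density.
    static int dpToPx(float dp, float dpi) {
        return Math.round(dp * (dpi / 160f));
    }

    // Pick a layout "inflection point" from the available width in dp,
    // similar in spirit to resource qualifiers like sw600dp.
    static String layoutFor(float widthDp) {
        if (widthDp >= 600f) return "two-pane";   // tablet-ish widths
        return "single-pane";                     // phones
    }

    public static void main(String[] args) {
        System.out.println(dpToPx(48, 480));      // 48dp on an xxhdpi screen -> 144
        System.out.println(layoutFor(411));       // typical phone width -> single-pane
    }
}
```

The point being: once everything is expressed in dp plus a couple of width breakpoints, the number of distinct physical screens stops mattering.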
- Developer fee of $25 if you plan to actually publish to the store
- Needs a quad-core Xeon with at least 16 GB and an SSD to have a usable experience with Android Studio, or configure it to run in laptop mode
- NDK requires lots of JNI boilerplate to call about 80% of Android APIs
You are confusing VR and AR. AR has a ton of legitimate use cases outside gaming:
- https://storify.com/lukew/what-would-augment-reality
- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit
Xcode is the most "different" IDE from other IDEs. I like Swift but just don't like Objective-C. The build tools and ecosystem are too tightly tied together (I like to switch between development machines without having to always be on a Mac).
Java is definitely painful, but I suppose the bias I have here is that I have developed on it for several years.
The breath of fresh air so far has been React Native, and I wish more things would get ported over to JS (or something like Expo Kit).
I find it curious you bring up Expo, as I've found it's the most opaque and user-unfriendly IDE I've used in some time; I don't get why they don't just leverage VSCode and quality tooling over Yet Another Goddamn IDE.
I've only been doing Android for 9 years. I've seen the development tools evolve and improve over the past few years.
But what do I know. You're probably the expert here.
Kotlin may be great. I haven't used it much, but so far I'm finding it can't hide Android's bloated, over-engineered substructure.
Instant Run was buggy and, ironically, made builds a lot slower the last time I tried it. That was after they said they'd fixed a bunch of issues and we should give it another chance.
I've been doing Android since 1.0 and started with iOS/Swift half a year ago. I think the iOS platform is nicer, simpler, more thought out and overall a better experience, but not by a huge margin. Haven't used Kotlin yet though.
One area where iOS sucks is creating and (sometimes) working with UI.
Was it really necessary?
I'd compare developing on Android very unfavorably to iOS. All the points the OP made are accurate, in my experience. Every time I needed to dive into Android native code, layout inflation, etc., I found it to be a crufty and unpleasant system. And Gradle is really a pain to use - tons of edge cases and unhelpful error messages. Add to that, you need to support like ten thousand devices, many of which are running Android 4.4 (which, IIRC, is like three years old) and have a WIDE range of screen sizes.
Compare that to iOS development, and the differences seem obvious and apparent to me.
So why don't we actually discuss the product instead of finding a way to shoehorn in an unrelated topic to complain about?
Even if you think it resonates because people are just drinking the Apple kool-aid, that's still Google's problem.
Let Google handle its own problems, and use whatever vote power you have to help steer the conversation here. Complaining about it doesn't exactly reduce the signal to noise ratio.
And I was talking about the product. ARcore is a single example of the larger, broader problems I "complained" about.
Even worse is you're describing the improved Android development experience.
ARKit isn't even out yet though...
Android's mess of Java cruft may be overkill, but it's at least well-documented, text-file-configurable, and can 100% be maintained without ever installing Studio.
It appears the ARCore API is well designed and 1:1 feature-equivalent with ARKit, i.e. VIO + plane estimation + ambient light estimation. The APIs even share a lot of names, e.g. Anchor, HitTest, PointCloud, LightEstimate.
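The hit-test half of that feature set is conceptually just a ray/plane intersection against the detected planes: cast a ray from the camera through the tapped pixel, intersect it with an estimated plane, and drop an Anchor at the hit point. A plain-geometry sketch of the idea (this is the underlying math, not the actual ARCore/ARKit API):

```java
// Conceptual sketch of what an AR hit test does: cast a ray from the camera
// through the tapped pixel and intersect it with a detected horizontal plane.
// This is plain geometry, not the ARCore/ARKit API itself.
public class HitTestSketch {
    // Ray: origin o + t * direction d. Horizontal plane at height planeY.
    // Returns t, or -1 if the ray is parallel to or pointing away from the plane.
    static double intersectHorizontalPlane(double[] o, double[] d, double planeY) {
        if (d[1] == 0) return -1;            // parallel to the plane
        double t = (planeY - o[1]) / d[1];
        return t >= 0 ? t : -1;              // only hits in front of the camera
    }

    public static void main(String[] args) {
        // Camera 1.4m above the floor, looking forward and 45 degrees down.
        double[] origin = {0, 1.4, 0};
        double[] dir = {0, -1, -1};          // not normalized; t scales with |d|
        double t = intersectHorizontalPlane(origin, dir, 0);
        double[] hit = {origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]};
        // This hit point is where you'd create an Anchor.
        System.out.println("anchor at (" + hit[0] + ", " + hit[1] + ", " + hit[2] + ")");
    }
}
```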
Now that stable positional tracking is an OS-level feature on mobile, whole sets of AR techniques are unlocked. At Abound Labs (http://aboundlabs.com), we've been solving dense 3D reconstruction. Other open problems that can be tackled now include: large-scale SLAM, collaborative mapping, semantic scene understanding, dynamic reconstruction.
With Qualcomm's new active depth sensing module, and Apple's PrimeSense waiting in the wings (7 yrs old, and still the best depth camera), the mobile AR field should become very exciting, very fast.
Also, when Clay Bavor was talking about Tango supported devices he remarked that the devices were getting smaller and smaller then implied it was coming to smaller, more traditional devices. I took this to mean they were close to getting the sensors ready for wide deployment but I suppose this could have just meant they were ditching the sensors because they felt the software was good enough.
I'm kind of disappointed. I'd hoped that he was saying that Tango sensors would show up on Pixel 2 (which was a long shot, from the leaked photos not really showing the many sensors you see on current Tango devices). Instead we have what feels like a rushed out me-too to match ARKit.
I hope Tango will continue to be developed. It is more robust for position tracking, can do 3D scanning, etc.
Very badly. I'm very disappointed by Tango in this regard.
http://www.androidauthority.com/google-tango-branding-retire...
But if they're changing everything over to ARCore then as a consumer how am I supposed to differentiate the phone that supports the software approach from a phone that supports the extra hardware sensors without digging into the specs? There doesn't seem to be a specific label for that.
There were quite a few of them already.
Hence why they're probably switching to the software-only approach where device support can be added more easily.
Google really has nothing to lose by following iOS lead, it's good that they "gave up" on Tango and decided to follow ARKit because that means Google is not trying to beat iOS with Android, but trying to commoditize iOS.
You really can't beat Apple at its own game, it's best to let go of that foolish goal and focus on trying to nullify whatever leverage Apple has with their few years lead.
Sure ARCore won't be installed on a lot of devices now, but in a couple of years they probably will (This is not the same as the Android ecosystem currently being fragmented because AR provides an entirely new type of UX and will be significant enough for people to get a new phone), and as long as Android gets there Google will have achieved its goal--commoditize AR.
In the end, Apple will have made tons of money with their iDevices, Google will NOT have, but they will have gained enough AR user-base that they can use it as their leverage, everybody wins.
It's actually impressive that Google is able to change direction and get this software-only AR out the door so quickly to compete with Apple, but they still don't want to admit that's what they're doing.
https://techcrunch.com/2017/08/29/google-retires-the-tango-b...
Apple's purchase of MetaIO and its focus on just SLAM is really the right way to go. Maybe improve it a bit via specialized hardware when available (progressive enhancement in a way), but at least start with SLAM.
Google didn't have to be behind on AR at this point in time if they had ditched the focus on Tango hardware and instead focused on SLAM.
But that is water under the bridge, Google is now on the right track after being forced to do so by Apple.
Including depth sensing HW was the right solution, but Google doesn't have its own popular smartphone as a forcing function. I predict Apple will eventually include a depth camera, or use dual cameras to try and synthesize one, and once that happens, all Android manufacturers will follow suit.
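Synthesizing depth from dual cameras is standard stereo triangulation: for a rectified pair, depth = focal length x baseline / disparity. A tiny sketch with made-up numbers (not any real phone's calibration):

```java
// Sketch of how dual cameras can synthesize depth: for a rectified stereo
// pair, triangulation gives depth = focal * baseline / disparity.
// The numbers below are illustrative, not a real phone's calibration.
public class StereoDepth {
    // focalPx: focal length in pixels; baselineM: lens separation in meters;
    // disparityPx: horizontal pixel shift of the same feature between lenses.
    static double depthMeters(double focalPx, double baselineM, double disparityPx) {
        return focalPx * baselineM / disparityPx;
    }

    public static void main(String[] args) {
        // ~1000px focal length, 1cm baseline, 10px disparity -> 1 meter away.
        System.out.println(depthMeters(1000, 0.01, 10)); // prints 1.0
    }
}
```

The tiny baseline on a phone is why this degrades quickly with distance: disparity shrinks toward sub-pixel levels, which is where an active depth sensor still wins.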
If AR is to be useful, it's got to be a lot better at tracking and drift, at making sense of the world, of supporting occlusion and mapping.
Google had been dead set on pushing Tango hardware to OEMs in the hopes that they would be able to lower the BOM on the hardware. Everyone who has been in AR long enough knew that wasn't going to happen and that monocular SLAM in software was the way forward on mobile.
The key thing now for AR devs is that they will have fairly comparable monoSLAM capabilities available on both Android and iOS for their apps.
HOWEVER, that just means the tracking portion of the equation is solved for developers. A few years ago it was possible to make a cross-platform monoSLAM app if you used a handful of tools like Kudan or Metaio. Obviously ARKit and ARCore are going to be more robust with better longevity; however, the failure of uptake of AR apps was not because of poor tracking, it was because there is an inherent lack of stickiness with AR use cases on mobile. That is, they are good for short, infrequent interactions, but rarely will you need to use the SLAM capabilities of an AR app every day or even multiple times a week.
This is why I am so invested in WebAR, because you can deploy an AR capability outside of a native app and the infrequent use means it can have longevity and a wider variety of users.
Yes, for those apps that people use all the time it will be very valuable, but if you look at the daily driver apps like FB, IG, Snap etc... they are already building the AR ecosystems into their own SLAM. All this does is lower overhead for them. For the average developer it doesn't solve the biggest problems in AR.
Kudos to Google, but developers need to really understand the AR use cases, implementations and UX if they want to use these to good effect.
[2] https://www.blog.google/products/google-vr/dance-tonite-ever...
Remember early 3D in the 90s? We had the S3 ViRGE, 3dfx Voodoo, PowerVR, Rendition Verite, Matrox, TNT, etc. They had a huge disparity in capabilities, fill rates, and APIs, and most didn't support OpenGL; even 3dfx, the card closest to what games settled on as a minimum set of functionality, only supported Carmack's miniGL. Early DirectDraw and Direct3D were horrendous, and to get performance, games had to be ported to each card's proprietary APIs. Effectively, Quake and Unreal became the Unity of their day, offering a higher-level abstraction for building cross-platform titles until the cards all converged on OpenGL.
And converge they did. Eventually most cards offered similar fillrate, multitexturing, and fixed pipeline options, the market settled on a common hardware featureset, and then competed on price and performance.
Later, programmable shaders disrupted the market again, and we went through iterations of pixel/vertex shaders from 1.0/1.1/1.2/1.3/1.4 to 2.0 to 3.0 and then GLSL and finally something like CUDA.
I think we're going to see the same thing happen in mobile and whatever fanboys propose as some kind of insurmountable advantage will turn out to get commodified if it becomes successful. For example, if AR takes off, or if Apple adds a depth sensor and Tango-like functionality takes off and a huge startup market and VC funding coalesces around it, then roughly 1-2 years later, every Asian OEM will have Android devices with depth cameras and similar functionality.
The only reason for the discrepancy today is the hardware fragmentation. But the market follows the money and abhors a vacuum. Hardware convergence in capabilities always follows, and eventually developers end up with middleware to address it.
This does lead to "iOS first" for startups, but if you look at the App Store and Play Store today, practically every major game and app you want is available on both platforms. It'll take years for this to shake out, but if AR becomes huge, smartphones in 5 years will all have roughly a similar set of features.
P.S. My own opinion is that a phone's viewport is too small for a great AR experience. It's a nice initial experience and visually impressive, but it will quickly become tiring. The long-term form of this has to be some kind of glasses, because waving a phone around in all directions and holding it in midair while touching the UI is kind of awkward.
I'm 100% certain that's what Apple is preparing for. AR in a phone is a neat toy, a gimmick. The most perfect AR toolkit ever made still won't change the fact that you're holding a phone in your hand, interacting with it through a screen, etc.
Obviously if they built using Unity it should be a simpler port, but clearly not all of them do.
You are either confusing VR and AR or you have about zero imagination. AR has a ton of legitimate use cases outside gaming:
- https://storify.com/lukew/what-would-augment-reality
- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit
Do you mean Tensorflow Lite? That's not part of Oreo, so shouldn't be something people have to wait 2 years for.
IMO AR in smartphones and tablets is a fad that in 2 years nobody will care about. Remember all those gyroscope/accelerometer based games? Yeah me neither.
Maybe AR will be awesome when someone (Apple? Microsoft?) releases a pair of lightweight glasses that can produce stereoscopic images superimposed seamlessly over reality, but we are still very, very far away from that.
So if one needs to evolve AR hardware from phones to glasses, then putting it on the phones is a prudent next step, isn't it?
AR isn't just a new interface with the user, but also a new interface with the visual environment around the user. This has many more degrees of usefulness than an accelerometer.
Time will tell, but if we are let's say 10 years away from good AR glasses what difference does it make if smartphones of today can display AR content?
Obviously Apple (and now Google) are fighting in the marketing space, not technical one.
In truth the problem is really hardware not software.
2. If you think that people will opt for glasses when they don't need them, and they have smartphones with them all the time, you're gravely mistaken (there are also physics which make "lightweight glasses with images superimposed in real time blah blah blah" an impossibility)
Could you elaborate here? I'd like to understand the physical limitations.
How long until they update the ChromiumAR project with support for ARCore and when will that preview and then be available? I know that tons of people are waiting on that:
Google has already abstracted it to WebAR on both iOS (via ARKit) and Android (via ARCore) here:
https://developers.google.com/ar/develop/web/getting-started
And of course Unity and Unreal Engine will also act as abstraction engines for native as they tend to do.
The kit works on Unity and supports both Android and iOS, using the right library on the right phone.
Vuforia's mobile kit works on iOS/Android/desktop and has been around since Qualcomm made it, starting around 2012. They don't have the point cloud and ambient lighting tech that make ARKit/ARCore a leap, though. Vuforia will probably not be able to keep up in the long run, but it is widely available today because it is hardware-independent.
With ARKit/ARCore hardware requirements it will take a few years before there is a large enough market to make a mainstream app not just a showcase AR app/game. For this to truly be something that you can get into games that are cross platform, it will take an independent third party like Unity/Unreal/Vuforia/metaio (before Apple bought them) etc to make it mainstream, or an app architecture that can switch out AR depending on device/capability.
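That "switch out AR depending on device/capability" architecture can be as thin as a small interface with a backend picked at startup. A hypothetical sketch (all the names here are invented for illustration; they are not Unity, Vuforia, or ARCore APIs):

```java
// Hypothetical sketch of an app architecture that swaps AR backends by
// device capability: code against one small interface, pick the best
// supported backend at startup. All names are invented for illustration.
interface ArBackend {
    String name();
    boolean isSupported();
}

class ArCoreBackend implements ArBackend {
    public String name() { return "ARCore"; }
    public boolean isSupported() { return false; } // would query device support
}

class FallbackBackend implements ArBackend {
    public String name() { return "marker-based fallback"; }
    public boolean isSupported() { return true; }  // works on most hardware
}

public class ArSelector {
    // Return the first supported backend, listed best-first.
    static ArBackend pick(ArBackend... candidates) {
        for (ArBackend b : candidates) {
            if (b.isSupported()) return b;
        }
        throw new IllegalStateException("no AR backend available");
    }

    public static void main(String[] args) {
        ArBackend b = pick(new ArCoreBackend(), new FallbackBackend());
        System.out.println(b.name()); // falls back on unsupported hardware
    }
}
```

The payoff is that the mainstream app ships today on hardware-independent tracking and silently upgrades to ARKit/ARCore-class tracking where it's available.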
I have launched lots of AR apps, mainly games for kids, using Vuforia, OpenCV, metaio and some other kits, primarily where the AR is an extra feature where people play the game on their desk or unlocks from AR targets/trackers on products. Currently it is a nice gimmick without real-world awareness but great for games.
ARKit/ARCore are better than Vuforia but still locked to a platform, which will bring challenges until it is abstracted into a common feature to use in Unity/Unreal/WebGL etc. Exciting progress on AR between Apple and Google, though, and competition like this always benefits everyone.
I'd love to see an updated one that compared the two directly.
Anybody know where I can find more APK sample apps to test?
Supports Unity, and works on both iOS and Android out of the box. (I'm not affiliated, just a supporter.)
What are the useful applications for AR outside of verticals?
I've not seen anything compelling in the phone only incarnation.
The headsets have a lot of engineering issues ie many years to overcome.
Even with headsets its unclear the value of adding the visual clutter and noise that most ambient/immersive computing demonstrations seem to assume.
Whatever value you can add generally requires constant headset wear for it to be ready to hand. This puts even harder engineering problems on the industry as it forces super light and easy headsets (google glass was not AR nor a technical path to it).
Not seeing it yet.
There are tons of use cases for AR:
- https://storify.com/lukew/what-would-augment-reality
- http://www.madewitharkit.com/ideas and their twitter https://twitter.com/madewitharkit
Headsets have a lot to do with AR, which is why HoloLens is a headset; headsets let it be handsfree, use the entire circumambient space, and provide an important and intuitive control for the portion of reality the user is interested in having augmented.
They also allow for stereoscopic 3d, which is useful, though not always essential, for AR.
Can't see any of them being worth putting a headset on.
Can't see any of them being worth launching an app on a phone to stare through a camera at.
Some of them need some pretty next level ai.
Is that a limitation of ARKit too?
What would it take to make it "real 3D"?
I think you mean “inferred” rather than “seen”, if it is an assumption based on avoidance, and there are other explanations; while HoloLens is better equipped than phone-holder software AR to avoid this, the one time I did get to use one there were some glitches when the “augmentation” should have been obscured by the “reality”. If ARCore handles that in principle but is annoyingly glitchy in practice in its current preview-quality state, you might reasonably avoid it in demos.
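For context, the occlusion being described reduces to a per-pixel depth comparison: draw the virtual pixel only where the virtual object is nearer than the real surface. The hard part is obtaining real-world depth at all (a depth sensor or dense reconstruction); the test itself is trivial, as this sketch shows (illustrative, not any SDK's API):

```java
// Sketch of the occlusion test described above: a virtual pixel is drawn only
// where the virtual object is nearer than the real surface at that pixel.
// Obtaining realDepth is the hard part (depth sensor or dense reconstruction);
// the comparison itself is trivial. Illustrative, not any SDK's API.
public class Occlusion {
    static boolean drawVirtualPixel(float virtualDepth, float realDepth) {
        return virtualDepth < realDepth; // nearer real geometry occludes it
    }

    public static void main(String[] args) {
        // Virtual cube 2m away, real wall 1.5m away: the wall should hide it.
        System.out.println(drawVirtualPixel(2.0f, 1.5f)); // prints false
        // Same cube in open space (nearest real surface 5m away): draw it.
        System.out.println(drawVirtualPixel(2.0f, 5.0f)); // prints true
    }
}
```

The “glitches” in question are exactly what happens when the estimated real depth is noisy: the comparison flips frame to frame and the augmentation flickers through the surface that should hide it.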
Apple has very tight control over their components so they can do this but managing this across a million OEMs and device models (as it is with the Android ecosystem) is close to impossible.
Tango tried to solve the problem by specifying a software and hardware stack for OEMs to use, but now it looks like Google is just too jealous to let Apple have a good time with ARKit, hence the "me too".
Why is it a Google "me too"? Tango was released in 2014. The basic plane detection functionality that's in Tango is derived from the same mechanism that Apple uses. Facebook released an ARKit-like library at their conference before ARKit was even announced.
When Apple is late to the party, it seems people say "it doesn't matter if you're first, Apple waits till its 'ready'", but when Apple is perceived to have done something first, suddenly everyone accuses Apple's competitors of being thieves and copying.
Generally people say that in response to everyone accusing Apple of copying.
Anyone else here got it?!
From seeing ARKit examples that people have posted to Twitter the thing that has impressed me the most is the ability of ARKit to track your position even if you turn around and walk around, even all around in the office. I hope Google's version can do that as well because it seems like it would enable some really fun activities.
Doesn't look like it, huh?
> Build and increment to 0.1.1
> jsantell committed 26 minutes ago (failed)
....
> Fix linting
> jsantell committed 24 minutes ago (success)
edit: aww come on folks it's all in good fun