UA overrides are not the only tool in our toolbox. They're deployed in concert with outreach from webcompat.
UA overrides are a way of breaking a cycle, not preserving it.
They make it feasible for Japanese users to use Firefox without manually messing around with UA overrides themselves (e.g., using Phony). That gives us leverage, and with leverage our excellent webcompat team can get sites to make changes.
[1]: http://stackoverflow.com/a/31279980
[2]: http://blogs.windows.com/msedgedev/2015/06/17/building-a-mor...
I used to advocate using something like Modernizr and then writing your code against the results, but now I think an even more straightforward approach is to do the feature detection you need directly in your code. There's no sense in loading a library and testing for things you don't need, only to have those tests totally decoupled from the parts of your codebase that depend on them.
It makes more sense to do only the feature detection you need right in your codebase adjacent to the code which relies on the results.
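A minimal sketch of what that looks like in practice. The `hasMethod` helper and the clipboard example are illustrative, not from any library; the point is that the check lives next to the code that needs it:

```typescript
// Illustrative helper: does this object expose a callable method?
function hasMethod(obj: unknown, name: string): boolean {
  return !!obj && typeof (obj as Record<string, unknown>)[name] === "function";
}

// Detect exactly the feature you rely on, right where you rely on it.
function copyText(text: string): Promise<void> {
  // globalThis keeps this compiling outside a browser; in a page this
  // is just navigator.clipboard.
  const clipboard = (globalThis as any).navigator?.clipboard ?? null;
  if (hasMethod(clipboard, "writeText")) {
    return clipboard.writeText(text);
  }
  return Promise.reject(new Error("Clipboard API unavailable"));
}
```

No library load, and the test can never drift out of sync with the code that depends on it.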
I'm old enough to remember that it was much earlier than that. In the early '00s there were already calls to do feature detection; jQuery was released in 2006 and basically did it for you. In 2015 there is no excuse for UA sniffing; if anything, because we now have 20 years of case history showing that people will trivially spoof it.
While writing your own feature detection is still probably lighter weight (although not by much if you customize Modernizr correctly), you're not going to catch all the edge cases Modernizr does unless you spend A LOT of time on it.
Two examples:
1) There's currently no way to reliably check whether a browser supports device orientation, perhaps because of bugs or incomplete implementations.
2) It's impossible to tell when iOS 8.x has added the chrome around the page in landscape, removing nearly a third of the entire viewable area on an iPhone 5S.
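To illustrate the first case: the API surface for device orientation can exist even on hardware that will never fire the event, so a property check proves nothing. This is a hedged sketch; `win` stands in for `window` so the check is testable:

```typescript
// Checking the API surface only tells you the interface exists, not that
// the device will ever dispatch an orientation event.
function apiSurfaceSuggestsOrientation(win: { DeviceOrientationEvent?: unknown }): boolean {
  return typeof win.DeviceOrientationEvent !== "undefined";
}
// A desktop browser can return true here yet never fire the event; the
// only honest test is to listen and time out, which is inherently racy.
```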
https://github.com/Zarel/Pokemon-Showdown-Client/commit/30f2...
In Chrome, event.pageX/pageY refer to the position of the top left corner of the drag-preview-image, relative to the top left corner of the page.
In Safari, event.pageX/pageY refer to the position of the mouse cursor relative to (0, window.innerHeight * 2 - window.outerHeight), a point slightly above the bottom left corner of the screen.
(Neither of these matches the spec, which as far as I know says that event.pageX/pageY should be the position of the mouse cursor relative to the top left corner of the page.)
I eventually got around it by storing values from the `drop` event, but anyone who needed these values from the `dragend` event would be screwed.
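The workaround described above can be sketched roughly like this (the names and narrowed event shape are illustrative, not from the linked commit):

```typescript
// Only the fields this sketch uses.
interface PagePoint { pageX: number; pageY: number; }

// Cache trustworthy coordinates from `drop`, since `dragend` reports
// pageX/pageY inconsistently between Chrome and Safari.
let lastDropPos: { x: number; y: number } | null = null;

function onDrop(event: PagePoint): void {
  lastDropPos = { x: event.pageX, y: event.pageY };
}

function onDragEnd(_event: PagePoint): { x: number; y: number } | null {
  // Deliberately ignore the event's own coordinates.
  const pos = lastDropPos;
  lastDropPos = null;
  return pos;
}
```

This only helps when a `drop` actually fires first, which is exactly why anyone who needs the coordinates from `dragend` alone is stuck.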
My next most recent problem unsolvable by feature detection has to do with the HTML5 Notification API, which was massively changed recently. Attempting to use the new API on old versions of Chrome would crash the render process (not just throw an error you could catch in a try block, but actually crash, like http://i.stack.imgur.com/DjdCX.png ). Of course, using the old API on new versions of Chrome would fail silently, so there was no way to reliably deliver a notification to the latest version of Chrome without crashing older versions or resorting to user agent detection:
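A hedged sketch of the kind of UA check this situation forces. The version cutoff (22) is a placeholder assumption, not the real boundary between the old and new Notification APIs:

```typescript
// Extract the Chrome major version from a UA string, or null if absent.
function chromeMajorVersion(ua: string): number | null {
  const m = /Chrome\/(\d+)/.exec(ua);
  return m ? parseInt(m[1], 10) : null;
}

function shouldUseNewNotificationAPI(ua: string): boolean {
  const v = chromeMajorVersion(ua);
  // Feature presence can't be trusted here: constructing a Notification
  // on an old Chrome crashed the renderer rather than throwing a
  // catchable error. Cutoff version is hypothetical.
  return v !== null && v >= 22;
}
```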
User agent sniffing is just a bad practice. Everything about it is a hack. There's no right way to do something wrong.
The first is how many different ways sites are determining the browser corresponding to the user agent. You know that they're all based on "guess and test": come up with an idea for an if condition, open up the code in the 5 browsers you care about, see if the result works. It almost reminds me of the output from a fuzzer: it's a valid answer, but you can tell that a random number generator and a lot of tries are what got you there.
The second is how many web developers seem fine writing the same website many times: once for mobile, once for IE, once for Chrome, once for Firefox. I've always taken the approach of doing exactly one site, using whatever features are available in the worst browser the client wants supported. If extensive workarounds are needed to make a feature work in every browser, I say skip it. (I was always happy with what IE6 could do.) Of course, when I did web development, it was mostly boring corporate applications, not public websites facing pressure from competitors willing to write a codebase from scratch for every browser back to NCSA Mosaic. I consider myself very lucky.
The idea is that you're able to encode information about the execution environment into the type system, so that you can do things like write functions that depend on having access to GPS coordinates, or an audio output device, and so on.
The theory is that this will make it easier to target a wide variety of platforms, or devices which may restrict access to information through permissions systems, like those offered by Android or iOS. If, at compile time, you know you've handled all of the cases of an environmental constraint either being met, or not being met, then you have a much stronger guarantee that your code isn't going to unexpectedly fail spectacularly on a platform you haven't tested against.
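One way to sketch the idea is a discriminated union over a capability's possible states, so the compiler forces every case to be handled. All names here are illustrative, not from any real framework:

```typescript
// Encode an environmental capability in the type system.
type GpsCapability =
  | { kind: "available"; lat: number; lon: number }
  | { kind: "denied" }        // permission system refused access
  | { kind: "unsupported" };  // platform has no GPS at all

function describeLocation(gps: GpsCapability): string {
  switch (gps.kind) {
    case "available":
      return `at ${gps.lat},${gps.lon}`;
    case "denied":
      return "location permission denied";
    case "unsupported":
      return "no GPS on this device";
    // No default branch: adding a new capability state without handling
    // it here becomes a compile-time error, not a runtime surprise.
  }
}
```

The exhaustiveness check is what delivers the "stronger guarantee" mentioned above: untested platforms can only put you in a state the compiler already made you handle.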
Stop this feedback loop. The client is not responsible for the server's bugs.
The vendor has an obligation to create the best web experience for the user, and while it's certainly a sad state of affairs that more sites aren't testing for features rather than the UA string, it's a sad fact that not all web authors care enough about interoperability to mend their ways.
I've used detectmobilebrowsers.com as a baseline in the past, tweaked only slightly so that fallbacks matching "phone" get "xs" (extra small), other common mobile OSes get "sm" (small), and desktops get "md" (medium). If you use adaptive CSS and JS, you can (and should) adapt when the pre-rendered environment doesn't match.
There are other modules written to predetermine a lot more, but it's all kind of a bit sad.
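The xs/sm/md mapping described above can be sketched like this; the regexes are simplified stand-ins for the full detectmobilebrowsers.com pattern, not its actual logic:

```typescript
// Map a UA string to a pre-render breakpoint class (assumed mapping:
// phones -> "xs", other mobile devices -> "sm", desktops -> "md").
function breakpointForUA(ua: string): "xs" | "sm" | "md" {
  if (/iPhone|Windows Phone|Android.+Mobile/i.test(ua)) return "xs"; // phones
  if (/iPad|Android|Mobi/i.test(ua)) return "sm";                    // other mobile
  return "md";                                                       // desktops
}
```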