"This is entirely server-side UA sniffing going wrong. You get an empty HTML doc, only a doctype, with a Firefox Android UA. You can reproduce this with curl,
$ curl -H "User-Agent: Mozilla/5.0 (Android 10; Mobile; rv:123.0) Gecko/123.0 Firefox/123.0" https://www.google.com
<!DOCTYPE html>%
and it seems that this affects all UA strings with versions >= 65. <=64 work."https://camo.githubusercontent.com/16a554571b3cea973d2b73464...
On iOS 14 the 'login' button simply does nothing, regardless of the browser.
It's been reported, and yet the last time I checked it still doesn't work. It's been more than two months already, and it seems nobody gives a sh….
It's not amateurish to have problems. It is amateurish (or malicious) to have problems caused by this specific class of issue.
Disclosure: I work at Google but not on this. This was linked from the Bugzilla bug.
There's a feature important to Middle Manager 32456. You can't just revert it for a non-Google browser. That's just a no-go.
So a fix needs to be developed, QA'd, and rolled out. I presume it's going to be an out-of-schedule roll, so it'll probably involve some bureaucracy. (One of my clients is a few-thousand-person public company, and even there a hotfix requires QA lead approval.)
Nothing ever is simple.
OTOH, it's amazing that apparently they don't have UI tests for FF mobile.
From Google's standpoint, this issue may be considered a "new" bug, which means they need to conduct an investigation and address it. Consequently, a rollback is not a viable solution for them.
If you want to say what you think is important about an article, that's fine, but do it by adding a comment to the thread. Then your view will be on a level playing field with everyone else's: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
(Submitted title was "Google breaks search for Firefox users because of bad UA string sniffing")
This keeps happening to Google, over and over, at a rate far in excess of other major tech companies, so we know they don't make an effort to test with Firefox. It's why I'd like regulators to obligate anyone making a browser used by more than, say, 100k people to test in other browsers on every site their larger organization operates, with a maximum time to patch any compatibility or performance issue that isn't due to a browser not implementing a W3C standard. The goal would be to cover things like YouTube using the Chrome web component API for ages after the standard shipped and serving a slow polyfill to non-Chrome browsers.
If it looks like a banana, shaped like a banana, feels like a banana, it’s probably a banana (or a plantain).
> "Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?"
> "I'm all for 'don't attribute to malice what can be explained by incompetence' but I don't believe Google is that incompetent. I think they were running out the clock. We lost users during every oops. And we spent effort and frustration every clock tick on that instead of improving our product. We got outfoxed for a while and by the time we started calling it what it was, a lot of damage had been done," Nightingale said.
I tend to believe him
Not to mention the fact that this is google.com, not some wild feat of frontend engineering. They shouldn't be serving any JS that wouldn't run on IE, forget anything close to a modern browser.
This is entirely server-side UA sniffing going wrong. You get an empty HTML doc, only a doctype, with a Firefox Android UA. You can reproduce this with curl and it seems that this affects all UA strings with versions >= 65. <=64 work.
> Warning: A number of older versions of browsers including Chrome, Safari, and UC browser are incompatible with the new None attribute and may ignore or restrict the cookie.
This caveat meant that UA sniffing was the only reliable means of setting cookies that were guaranteed to work properly.
This had nothing to do with feature detection, so the usual suggestions simply won’t work.
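For context, the server-side pattern the parent describes looked roughly like this: a hedged sketch only, with the incompatible-client checks abbreviated from the lists that circulated around 2019-2020 (old Chrome 51-66 rejecting `SameSite=None` cookies, Safari on iOS 12 / macOS 10.14 treating `None` as `Strict`, old UC Browser). The function names are mine, not from any particular codebase, and the regexes are illustrative rather than exhaustive:

```javascript
// Decide whether a client is known to mishandle `SameSite=None`.
// These checks mirror the commonly published incompatible-client lists;
// treat them as illustrative, not authoritative.
function isSameSiteNoneIncompatible(ua) {
  return (
    /Chrome\/5[1-9]\./.test(ua) ||   // Chrome 51-59 reject SameSite=None
    /Chrome\/6[0-6]\./.test(ua) ||   // Chrome 60-66 reject SameSite=None
    (/Macintosh;.*Mac OS X 10_14/.test(ua) &&
      /Safari/.test(ua) && !/Chrom(e|ium)/.test(ua)) || // macOS 10.14 Safari
    /\(iP(hone|ad|od).*OS 12_/.test(ua) ||              // iOS 12 Safari
    /UCBrowser\/(\d|1[01])\./.test(ua)                  // UC Browser < 12
  );
}

// Build the Set-Cookie value, omitting the attribute for clients that
// would choke on it; they then fall back to the old no-SameSite behavior.
function cookieHeader(name, value, ua) {
  const base = `${name}=${value}; Secure`;
  return isSameSiteNoneIncompatible(ua) ? base : `${base}; SameSite=None`;
}
```

This is exactly the kind of UA sniffing that is defensible: feature detection cannot run before the response headers are sent, so the server has nothing else to go on.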
You don't get that sort of speed without heavy optimisation. In fact I'd be amazed if www.google.com isn't the most heavily optimised page on the internet. So it's not at all surprising to me that they optimise based on what browser is asking.
Basically, you can send slightly less data by sending only the relevant layout/JS code etc. with the first request, instead of sending some variation that the browser then selects from and branches on.
It's most likely a terrible optimization for most sites, though maybe not for Google Search, given how often it's called.
But then Google is (as far as I remember) also the company that has tried to push for the removal of UA strings (by freezing all UA strings to the same string, or at least a small set of strings)... so quite ironic.
But then you will also be incompetent at doing it right with UA sniffing, which is even harder and requires more maintenance to keep the list up to date.
That's the obtuse thought process that gets you the garbage Google just showed us.
UA sniffing should be a last resort but there have been times where browsers have claimed to support something but had hard to detect bugs and the cleanest way to handle it was to pretend the feature detection failed on those older browsers.
This is increasingly rare – the last time I had to do it was for Internet Explorer – but I would not take a bet that nobody will ever have a need for it again.
As an example, I recently had to deal with an exception to a TLS 1.2 requirement because while almost all of our traffic is from modern user agents, there was a group of blind users with braille displays using Android KitKat which defaults to only supporting a maximum of TLS 1.1 and “go buy an expensive new device” isn’t an appropriate response even though you might be entirely comfortable saying that random internet crawlers getting an error is okay.
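The "pretend feature detection failed" pattern described above can be sketched like this. Everything here is hypothetical for illustration: `canUseFoo`, the feature it guards, and the choice of a specific buggy Firefox version are made up, not taken from any real site:

```javascript
// Extract the major version from a Firefox UA string, or null if the
// UA is not Firefox.
function firefoxMajorVersion(ua) {
  const m = /Firefox\/(\d+)/.exec(ua);
  return m ? Number(m[1]) : null;
}

// Normal path: trust feature detection (`nativeSupport` would come from
// an actual capability check). Exception: a hypothetical rendering bug
// in one specific Firefox version that feature detection cannot see,
// so we force the fallback code path there.
function canUseFoo(ua, nativeSupport) {
  if (!nativeSupport) return false;
  if (firefoxMajorVersion(ua) === 27) return false; // known-buggy version
  return true;
}
```

The key property is that the UA check only ever narrows support; it never grants a feature that detection didn't confirm, so an unknown or future browser gets the normal path.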
Just an evil company really.
However, they wanted to screw Firefox users the same way as with the YouTube slowdown.
There's a silver lining though: it really doesn't matter if it's due to negligence or malicious sabotage, and if they (where "they" is hypothetical leadership within Google that is trying to eliminate Firefox) thought that pleading negligence instead of sabotage was going to be a good defense, we'll see how that holds up to future scrutiny and regulation. I mean hey, Microsoft tried to pull a lot of shit too, and they were probably more competent at it if the Epic Games lawsuit is any indicator of Google's competence right now.
Google is Firefox's primary source of income, and Firefox is strategically crucial token competition that stands between Google and an antitrust lawsuit of the sort Microsoft faced in the early 2000s (where Microsoft was basically a technicality away from being broken up).
I suspect that these problems have more to do with neglected testing, and just not caring very much about non-Chromium browsers.
(Disclosure: I'm a long-time Firefox user)
Not saying that they shouldn't have detected this (i.e.: where is the automated testing for those things?), but I don't think this is Google screwing up Firefox on purpose.
And if Google really wanted to screw up Firefox, they would probably do a better job than User-Agent sniffing. Firefox already has an internal list where it applies fixes for websites (e.g.: setting a custom User-Agent), and the bug report actually comes from it. You can see it by going to `about:compat` page in Firefox.
They didn't test on it at all. It's a browser with sub-5% market share.
There are folks at Google who personally test on Firefox and believe in it ideologically as an important piece of the web ecosystem, but that's not company policy, and detecting Firefox bugs does not gate feature releases for most projects.
I really feel like HNers often forget how minuscule the scale of their usage patterns are.
It's fascinating how people clamor for more government regulation and control at the first sign of trouble, without reasoning about the implications of those things, despite a history replete with examples of power abuse from the same.
If the laws were still being enforced, a company that has a very large market share in web browsers, and a very large market share in search, would have to be extremely careful not to "accidentally" break their competitors' products.
Count me in for the destruction of the do-be-evil giant, but it's still absolutely dominant in the space.
I didn't even know what Kagi was.
It has one killer feature: you can block Pinterest from spamming your results.
Edit: I'm not saying Tesla is building a Street View clone. I can't imagine they have the bandwidth for that. But the cars recognize construction, speed cameras, police cars, red lights, red light cameras, and stop signs, and read speed limit signs, all of which gets sent back up in real time. They might have to pay to augment their traffic data until enough people without Teslas start using the app.
Even if a few techy or aware people use Kagi (or another alternative), it's still a drop in the bucket of overall search engine choice.
Google search is still widely dominant, as much as we might not want it to be.
At the time, Netscape and Microsoft were giving out free browsers while fighting for control of the profitable httpd server market, and blocking competitors' browsers "for security" or something else was the playbook of the day.
Is it just me, or is this absolutely insane? When Google ships a bug, it's suddenly the responsibility of browser vendors to "fix" it at the browser level?
It happens all the time, you probably just don't realize it. There is special code in Windows for supporting/un-breaking popular applications, same with Android and iOS.
IIUC "Interventions" are typically injected scripts or styles and "SmartBlock Fixes" are exceptions to tracker blocking.
Here's Raymond Chen sharing some of the worst examples from the 90s
https://ptgmedia.pearsoncmg.com/images/9780321440303/samplec...
Google has marginal incentive to not kill Firefox (antitrust), they have no incentive to make sure they provide a good experience on Firefox, let alone test with it.
The issue seems to be isolated to the search home page, and Google Search hasn't exactly been associated with "quality" in several years. Internal rot and disinterest gradually chip away at QA.
Not explicitly anyway. Google's testing strategy is as it has always been: do some in-house burn-in testing, then launch to a small number of users and check for unexpected failures, then widen the launch window, then launch to 100%.
In this case, the user base in question is so minuscule that no bugs probably showed up at the 1%, 10%, and 50% launch gates.
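The staged-rollout gating described above is usually just a stable hash of the user into a bucket, compared against the current launch percentage. This is a generic sketch of the technique, not Google's actual implementation; function names are mine:

```javascript
// Map a user id to a stable bucket in [0, 100) using FNV-1a, a tiny
// string hash. Any stable hash works; the point is that the same user
// always lands in the same bucket across requests.
function bucket(userId, buckets = 100) {
  let h = 0x811c9dc5;
  for (const ch of userId) {
    h ^= ch.codePointAt(0);
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h % buckets;
}

// A user is in the launch if their bucket falls below the current
// rollout percentage (1, 10, 50, 100, ...).
function inLaunch(userId, rolloutPercent) {
  return bucket(userId) < rolloutPercent;
}
```

This also illustrates the parent's point: if the affected population (Firefox mobile users) is a fraction of a percent of traffic, then at a 1% rollout gate the number of affected sessions may be too small for error monitoring to flag anything.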
At least not for Firefox mobile.
They probably test on Chrome (desktop + mobile), Safari (desktop + mobile), Edge (desktop), and maybe Firefox (desktop), but probably no other browser.
I never actually looked at what this does.
https://www.google.com/search?q=what+is+my+user+agent+string
Only breaking for new FF versions will give the non-technical user the impression that FF broke through an upgrade.
So I call this another not-so-feeble, but very evil (yes, I think we expect nothing less of them right now), attempt by Google to tighten its grip on the browser ecosystem.
Not least because it only impacts FF on Android.
On Android, the means to inspect are limited without a computer (so they might hope for users to switch because they need access, and then forget to switch back). That is also, to me, a sign that it was intentional, just because of how specific and browser-centric this is.
I feel I should note I have this in a user script:
document.getElementsByClassName('b_logoArea')[0].href = "https://google.com/search?"+document.URL.match(/q\=[^&]*/);
It makes it so that clicking the Bing logo sends the query to Google. Perhaps it doesn't like getting queried directly without being the default search engine?

Probably the backend sent the initial headers and the start of the response to the load balancer, then started rendering the page and crashed. Presumably `<!doctype html>` is hardcoded, as it is always sent for HTML pages, and maybe it fills up one of the initial packets or something. The rest of the page may be rendered and returned as separate chunks (or just one large content chunk).
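That speculated failure mode can be simulated in a few lines: the server flushes a hardcoded doctype as its first chunk, the renderer then throws, and the client receives an apparently successful response containing only `<!doctype html>`. This is a toy model of the theory above, not anything from Google's stack:

```javascript
// Simulate a streaming HTML response. The doctype preamble is written
// (and, on a real server, flushed to the wire) before the body is
// rendered, so a crash in the renderer cannot retract it.
function handleRequest(renderBody) {
  const chunks = [];
  chunks.push("<!doctype html>"); // hardcoded preamble, already sent
  try {
    chunks.push(renderBody());   // rest of the page, rendered later
  } catch (e) {
    // Too late to send an error page: the 200 status and first chunk
    // are already on the wire. All we can do is end the response.
  }
  return chunks.join("");
}
```

This is why early flushing trades latency for the ability to report errors: once the first byte ships, every later crash looks like a truncated but "successful" page.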
I'm still using Firefox because I'm ideologically motivated to, but it's no wonder that so many users are dropping the fox for chromium browsers.
"Over and over. Oops. Another accident. We'll fix it soon. We want the same things. We're on the same team. There were dozens of oopses. Hundreds maybe?"
https://www.zdnet.com/article/former-mozilla-exec-google-has...
"The page is blank. The page is blank the page is blank."
For example, if you see that errors are being thrown on your site and nearly all of them are from the new release of Chrome, then you can quickly narrow it down to a compatibility issue with the new version. Similarly, if someone is making broken requests to your site at a high rate, it may help you figure out who it is and contact them.
It can even be useful in automated use cases. If Firefox 27 has a bug it is reasonable to add very specific checks to work around it gracefully. When possible feature detection is better, but some bugs can be very hard to test for.
The problem comes from:
1. People whitelisting user agents.
2. Adding wide sweeping checks, especially if you don't constantly re-evaluate them (For example Brave versions later than 32)
In this case it seems that it is hard to actually conclude that a user-agent check was bad here. Google could have been trying to serve something that would work better on Firefox (maybe it worked around a bug). Of course that code crashed and burned, but that doesn't mean that the check was a bad idea.
I agree that user-agents are often used for evil, but they can also be used for good. It is hard to justify completely removing them. With great power comes great responsibility. Unfortunately lots of people aren't responsible.
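The "good" diagnostic use described above (grouping error reports by browser and major version so a spike after a new release stands out) can be sketched like this. The parsing is deliberately crude and the function names are mine:

```javascript
// Reduce a UA string to a coarse "browser + major version" key.
// Order matters: Edge UAs also contain "Chrome/", and Chrome UAs also
// contain "Safari/", so check the more specific tokens first.
function browserKey(ua) {
  let m;
  if ((m = /Firefox\/(\d+)/.exec(ua))) return `Firefox ${m[1]}`;
  if ((m = /Edg\/(\d+)/.exec(ua))) return `Edge ${m[1]}`;
  if ((m = /Chrome\/(\d+)/.exec(ua))) return `Chrome ${m[1]}`;
  return "other";
}

// Tally error reports by browser key; a sudden spike in one bucket
// points at a version-specific regression.
function countByBrowser(errorReports) {
  const counts = {};
  for (const { userAgent } of errorReports) {
    const key = browserKey(userAgent);
    counts[key] = (counts[key] || 0) + 1;
  }
  return counts;
}
```

Note that this is purely observational: nothing is served differently per UA, which is why this use carries none of the risk that sniffing-for-content does.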
If people don't want to pay for Kagi, then use Brave Search. DuckDuckGo has really gone downhill, so it's hard to recommend it.
I've seen this a lot lately, but no one has said why.
I solely use DDG and have done so for a long time. I have not noticed any specific changes, nor any degradation in my search results. I don't pay much attention to announcements or anything, so maybe I missed something?
Can you or someone please tell me how/why DDG is suddenly not recommended and "gone downhill"?
Switched to DDG ages ago, and even though it's not amazing, it's already better than Google.
Although, if we're honest, searching these days is absolutely atrocious everywhere. Any keyword search will only net you ads or "top X" articles full of SEO garbage most likely written by AI at this point...
I used to advocate for DDG, but they've gone so far downhill that I rarely waste my time with it. They've decided to go PG-rated everywhere even if you have safe search turned off. This is a problem because, even if they just want to block porn or illegal things, this can have an impact on legitimate academic research. And yeah, SEO trash floats to the surface, though they aren't entirely in control of this.
Kagi (which relies on Brave Search instead of Bing) has less flotsam to start with, provides lots of great tools to filter out the garbage, and is subscriber-driven instead of advertiser or investor driven.
DDG had potential, but they decided to give into moral panic and coast on their modest success instead of making substantial improvements. They also decided to pull a Mozilla and spend unnecessary effort working on a browser when they should be focusing on their core product. It'd be one thing to make a browser if their core product was actually good, but the best they are offering is a pinky-swear that your searches are private.
This is misleading. Kagi uses multiple external sources, including Google and Brave, as well as their own internal indexes:
https://help.kagi.com/kagi/search-details/search-sources.htm...
DDG works well enough to be my daily driver, but it's absolute garbage for any kind of news search. Bing prioritizes MSN repost spam from dubious sources, so I'm better off just going directly to news sites I trust.
Then you can "w chemistry" to open Wikipedia with a list of suggestions from the same Wikipedia ("Chemistry (Girls Aloud album)") in the same place
I also use Kagi lenses and it's been good, though for me the killer Kagi feature is being able to uprank/downrank/pin/block domains. Such an obvious and simple feature, such a powerful effect.
There is a psychological barrier to overcome in paying for searches, but once you get past that, Kagi makes a ton of sense. I suspect the reason Google never implemented personalized search results (like Kagi's where you can uprank/downrank domains) is because it's not about what you see, it's about what they show you. i.e. ads.
Also, YouTube is still the best video site despite going full mask-off on adblockers.
Then set up a gmail forwarding rule.
The full documentation is here:
https://www.fastmail.help/hc/en-us/articles/360058752414-Mig...
I honestly spent more time trying to wrangle the data in all of the other numerous Google services those accounts had data with.
Apple maps is Yelp spam.
What should I be doing? I was doing research.
I do say this as somewhat of a joke, but that's for me a real reason. It's not the first time Google has intentionally screwed over Firefox users.
Even if we believe what they say every time it happens: it's "just a bug". For me it clearly signals they simply don't care. And that's the best case scenario. Worst case they're abusing their market position to drive out competition.
Even if DDG serves requests from MS servers, that's not even close to opting into the Google surveillance machine.
Finally, if what you said holds true about serving basically the same content, then the hostility against Firefox and the fact that you won't be helping this monopoly should already make it "better".
> Since this has now been posted to HN, I'll be locking this thread. This bugtracker is a work place, not a discussion forum.
We have a bad reputation.
The respectful users will look at the thread and not comment to not derail the discussion. The fraction of a percent which may have something relevant to contribute probably already did or will find another way to do so if it’s important.
The remaining users will post low quality comments for a while then leave forever. Like a flash mob that invades your home, parties for a few hours, then leaves a mess for the regulars to clean up.
Can’t blame the owner for preemptively locking the door.
That's actually an excellent way to frame what a bugtracker is for.
It's about workers getting shit done, and it's optimized for the people who work the issues.
It isn't optimized for the people posting comments on issues.
And the fact that a lot of users who have never had to deal with fixing software bugs think it looks like a forum website would explain the impedance mismatch you often see.
The above phrase also explains why Social Media is not a bug-tracker. ^_^
I don't blame them for the lock.
Well, tradeoffs and whatnot with hosting your bugtracker on the most open code platform on the Web.
“Conversation limited to contributors until popular site calms down”

But not an undeserved one.
HN is too orange.
Of course outrageous GitHub issues get low quality comments from HN.
The guy probably reacted like this because of the dummy comment he marked as off-topic, and made the connection.
> 'now you have two problems' b-bazinga. Software engineering is hard.
He might have said the same of Reddit, or any social network. It's pretty reasonable and I think he made the right call.