Currently, Chrome does the following:
(1) on each network change, send three DNS requests with random hostnames.
(1a) If at least two of the queries resolve to the same IP, store the IP as the "fake redirect address".
(2) on a user search, query the first search term as DNS.
(2a) If the query result is NXDOMAIN or matches the fake redirect address, do nothing. Otherwise, show the "local domain" hint.
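Roughly, in Python (the `resolve` callback and `NXDOMAIN` sentinel are hypothetical stand-ins for Chrome's internal resolver; this is a sketch of the steps above, not Chromium's actual code):

```python
import secrets
import string

NXDOMAIN = None  # hypothetical sentinel for a failed lookup

def random_hostname(length=10):
    # hypothetical stand-in for Chrome's random probe names
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(length))

def detect_redirect_address(resolve):
    """Step 1: on each network change, probe three random hostnames."""
    ips = [resolve(random_hostname()) for _ in range(3)]
    hits = [ip for ip in ips if ip is not NXDOMAIN]
    # Step 1a: at least two matching answers => store the fake redirect address.
    for ip in hits:
        if hits.count(ip) >= 2:
            return ip
    return None

def should_show_hint(term, resolve, redirect_address):
    """Step 2: on a single-term search, look the term up as a hostname."""
    ip = resolve(term)
    # Step 2a: NXDOMAIN or the known fake address => no "local domain" hint.
    return ip is not NXDOMAIN and ip != redirect_address
```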
Instead, it could do:
(1) on a user search, query the first search term as DNS.
(1a) if the query comes back with NXDOMAIN, don't show the hint and stop. We're done.
(2) otherwise, make two more DNS queries with random domain names to check for fake redirects.
(2a) if the two queries resolve to the same IP as the first one, we have a fake redirect. Don't do anything. Otherwise, show the "local domain" hint.
Results of step (2) could be cached until a network change.
This would only require 2 instead of 3 probe queries and only if the user actually searched for something and if the search term actually caused a DNS match (fake or genuine).
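A sketch of the proposed lazy flow (again with a hypothetical `resolve` callback returning an IP string or None for NXDOMAIN, and a `cache` dict that would be cleared on network change):

```python
import secrets
import string

def _random_hostname(n=10):
    # hypothetical stand-in for a random, almost-certainly-unregistered label
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(n))

def should_show_hint_lazy(term, resolve, cache):
    """Probe for fake redirects only after a real search term resolves."""
    ip = resolve(term)
    if ip is None:
        return False  # step 1a: genuine NXDOMAIN, no hint, no probes
    if "redirect_ip" not in cache:
        # step 2: two random probes, issued only on this slow path
        a, b = resolve(_random_hostname()), resolve(_random_hostname())
        cache["redirect_ip"] = a if (a is not None and a == b) else None
    # step 2a: a hit on the cached fake redirect address suppresses the hint
    return ip != cache["redirect_ip"]
```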
From reading the source, it actually does an HTTP HEAD, chasing redirects, records the origin of the final page, and uses that as the redirect address. So even if two hostnames yield different IPs, the interception will still be detected if they end up redirecting to the same hostname.
> (2a) if the two queries resolve to the same IP as the first one, we have a fake redirect. Don't do anything. Otherwise, show the "local domain" hint.
What if an ISP uses multiple IPs in the fake redirect, and alternates over those IPs in each successive response?
Good point. I was wondering how they'd deal with that in the actual implementation.
I think you got the answer, though: they match HTTP origins instead of IP addresses. So I imagine you could do the same in step 2: do an HTTP HEAD request to the search word and two more to random hostnames, following redirects. If the final origins are the same, there is fakery going on.
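Something like this, as a sketch (the timeouts and the choice of scheme+netloc as the "origin" are my assumptions, and a real implementation would need error handling):

```python
from urllib.parse import urlsplit
from urllib.request import Request, urlopen

def final_origin(host, timeout=3):
    """Issue an HTTP HEAD to `host`, follow redirects, return the final origin.

    Illustrative only; don't point this at hosts you do not control.
    """
    req = Request(f"http://{host}/", method="HEAD")
    with urlopen(req, timeout=timeout) as resp:  # urlopen follows redirects
        parts = urlsplit(resp.geturl())
        return f"{parts.scheme}://{parts.netloc}"

def looks_intercepted(search_origin, probe_origins):
    """Fakery if the search term and both random probes land on one origin."""
    return all(origin == search_origin for origin in probe_origins)
```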
A problem with this could be unexpected HEAD requests to actual internal hosts: There is no guarantee an internal host that was never meant to receive HEAD requests would react gracefully or in any way predictable to one.
I'm not sure how they solve this currently. Maybe this could at least be mitigated by only sending the HEAD request to the search word host if there is reasonable suspicion requests are being redirected - e.g. only if the two random hosts resolved and were both redirected to the same origin.
Finally, you could cut all of this short by also connecting to (search word):443 and trying to initiate a TLS handshake. If the host answers, you know it's probably a genuine internal host that talks HTTPS, and you don't need any additional probes. (And you can abort the handshake, so you never need to send an actual HTTP request to the host.)
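A sketch of that TLS short-circuit (illustrative only; certificate verification is deliberately skipped because the connection is abandoned right after the handshake, and the timeout is an arbitrary choice):

```python
import socket
import ssl

def speaks_tls(host, port=443, timeout=2):
    """Attempt a TLS handshake with `host`; True suggests a genuine internal
    HTTPS host, so no further probes or actual HTTP requests are needed."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # we never send data, so skip cert checks
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True  # handshake completed; close without sending HTTP
    except (OSError, ssl.SSLError):
        return False
```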
Maybe, at the risk of over-engineering, additionally cache the results for the last N networks persistently. Something like (gateway, DNS, localip) as key. I could see those three being identical on different networks though... And assuming the article is right and most ISPs globally do not mess with NXDOMAIN, this might not be necessary anymore with this proposal.
- the queries only affect the time after which the "local domain" hint appears. They don't influence the time until the main search results appear.
- if the result is cached, the additional roundtrip is only for the first hostname entered after a network change.
- the two "random hostname" probes can be executed in parallel, so it should not result in more than one additional roundtrip.
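For the parallelism point, the two probe lookups can be issued concurrently, e.g. (with `resolve` again a hypothetical blocking DNS lookup):

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_probes(resolve, hostnames):
    """Run the random-hostname lookups concurrently, so the interception
    check costs one extra round-trip rather than one per probe."""
    with ThreadPoolExecutor(max_workers=len(hostnames)) as pool:
        return list(pool.map(resolve, hostnames))
```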
And yes, luckily there is a policy to disable it: https://cloud.google.com/docs/chrome-enterprise/policies/?po...
Registry key: Software\Policies\Google\Chrome\DNSInterceptionChecksEnabled
PowerShell: Set-ItemProperty HKLM:\SOFTWARE\Policies\Google\Chrome -Name DNSInterceptionChecksEnabled -Value 0 -Type DWord
If you are managing Chrome via GPO, you should do it via GPO. Templates can be downloaded here: https://chromeenterprise.google/browser/download/
Even better, spin up a little VM or VPS somewhere in the cloud, install 'unbound' as a recursive resolver and point it to your nextdns.io account/address.
Let's unpack this ... backwards ...
DNS servers out on the Internet are queried by nextdns, which presumably has no PII from you other than your CC number[1] and zip code.
Nextdns receives nothing but queries from some random VPS/EC2/VM IP. Again, presumably a provider that knows (almost) nothing about you.
Your ISP sees nothing ... just encrypted DNS traffic.
It's win, win, win.
You see no ads, since nextdns.io acts like a pihole and strips/blocks all of the malicious hostname lookups.
[1] Remember, only AMEX verifies cardholder FIRST LAST. Use your VISA/MC. I think my first/last is Nextdns User or whatever ... YMMV if a merchant is enrolled in that weird "verified by visa" service ...
* Type/paste a URL
* Type/paste a search
* Search my browser history (usually to jump to a previous URL)
* Use search engine keywords to do direct searches on some applications I use regularly (eg, "jira P-123" does a search in JIRA directly, which happens to jump to that ticket directly)
Browsers that separate those two drive me a bit crazy, because of the extra thinking required before typing.
(I don't really like this whole "using the first search term as DNS lookup" but that's separate from the UX of single vs separate inputs.)
CTRL-L: focus URL bar, typed text will be navigated to or searched for
CTRL-K: focus URL bar, typed text will be searched for
(same in Firefox, with the distinction that Firefox has two UI elements instead of one)
That's obviously so, because that's its entire raison d'être.
The problem is that some ISPs have configured their DNS resolvers to lie and not return NXDOMAIN, instead redirecting you to some website for marketing purposes. The Chromium workaround is to try to detect a lying DNS resolver by issuing queries that it knows SHOULD return NXDOMAIN.
If this concerns you, run your own resolver, enable DNSSEC validation, and enable aggressive NSEC caching (RFC 8198).
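For example, with unbound the relevant knobs (option names per unbound.conf(5); the trust anchor path varies by distro) look roughly like:

```
server:
    # DNSSEC validation (trust anchor maintained via unbound-anchor)
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
    # RFC 8198: synthesize NXDOMAIN locally from cached NSEC/NSEC3 records
    aggressive-nsec: yes
```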
The question is: does Chromium send the first word I type to my ISP?
The answer appears to be: yes.
I am getting close to moving to a hut in the woods and forgetting all about the internet.
If your ISP forges NXDOMAIN responses, the correct response is to use DoH to a provider that doesn't. That's a simple networking config change, for which there is UI in every mainstream operating system. The DNSSEC part of this conversation is just silly.
"Buy us out and we'll stop, and you can use the tech on your customers?!?"
One of the boldest business proposals I've been party to. After a few deep breaths and some laughter, the offer was not taken. But that wasn't a one-off event. Spent a lot of time in early 2010's directly trying to protect customers from this stuff. Still do, but it's getting much harder with TLS-everywhere, HSTS, DOH, and many other things. Not impossible though, we can never let up on the pressure to keep the ROI too low for hijacking. The various network operators and ISPs that let these companies put racks in their data-centers to inspect user traffic should be <<insert_your_own_horrible_idea_here>>.
I struggle to understand how DNS can possibly be a performance issue in 2020. In most corporate environments, the working set of a typical DNS server will fit in the L3 cache of the CPU, or even the L2 cache.
The amount of network traffic involved is similarly minuscule. If all 200K of your client machines sent 100 requests per second, each 100 bytes in size, all of them to just one server, that still only adds up to about 2 GB/s (16 Gbps).
If your DNS servers are struggling with that, get better servers. Or networks. Or IT engineers.
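Spelling out that arithmetic:

```python
clients = 200_000        # machines
qps_per_client = 100     # requests per second each (a deliberately high figure)
bytes_per_query = 100

total_bps = clients * qps_per_client * bytes_per_query * 8
print(total_bps / 1e9)   # → 16.0 Gbps, i.e. about 2 GB/s
```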
Not saying that we should embark on some quest for retribution against Google. It's just sad.
My solution was to assign the Pihole the IP address 8.8.8.8 as well. Then I added a static route in at the router to route 8.8.8.8 to the Pihole. Now every request to dns.google will also be handled by pihole instead of getting timeouts.
nice that you already debunked your thesis
I've got several rules for Google Chrome in Little Snitch that seem to do the trick. Deny outgoing UDP connections, and Deny outgoing TCP connections to port 80 for the IP addresses and domain for my ISP. You can see these if you monitor traffic.
I think the only downside is that you would leak some information about your system clock.
That would still allow ISPs to compute the limited number of domains for which NXDOMAIN would need to be sent at any given point in time.
(Whether they'd do it is another story. The random pattern currently used by Chrome looks like it may still be easily detectable at the DNS-recursor level, so maybe the ISPs really don't bother beyond the simple NXDOMAIN -> portal domain replacement.)
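For illustration, a naive recursor-side detector. The "single label of 7-15 random lowercase letters" shape is my reading of older Chromium sources and may be out of date, so treat it as an assumption:

```python
import secrets
import string

def chrome_style_probe(min_len=7, max_len=15):
    # Assumed probe shape: one random lowercase label, 7-15 letters long.
    n = min_len + secrets.randbelow(max_len - min_len + 1)
    return "".join(secrets.choice(string.ascii_lowercase) for _ in range(n))

def looks_like_probe(qname):
    """Crude check a recursor could run: dotless, all-lowercase-alpha
    label in the assumed probe length range."""
    return qname.isalpha() and qname.islower() and 7 <= len(qname) <= 15
```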
Select a root server at the bottom. Some, but not all, have a "statistics" link. Seems to be stated in qps and message size distribution, but you should be able to derive traffic volume from that.
https://github.com/rssac-caucus/RSSAC002-data
You can also click on the "RSSAC" button on root-servers.org to get the YAML straight from the root server operators themselves.
Most of the root server operators have anycast instances deployed in organizations that host the servers for them. So there's not an easy way to measure bandwidth utilization because many root server anycast instances are hosted in organizations that may not, or could not, report that bandwidth utilization. Look at the map on root-servers.org to see how dispersed around the world these things are.
[1] https://github.com/Eloston/ungoogled-chromium/blob/14fb2b0/p...
[2] https://github.com/bromite/bromite/blob/410fc50/build/patche...
Note that under this crude test of sending queries for unregistered domains, a user who administers their own DNS could be indistinguishable from "DNS interception" by an ISP or other third party.
I administer my own DNS. I do not use third party DNS. These random queries would just hit my own DNS servers, not the root servers.
> Users on such networks might be shown the “did you mean” infobar on every single-term search. To work around this, Chromium needs to know if it can trust the network to provide non-intercepted DNS responses.
Don't know if this is the sole reason.
Reminds me of the story behind "Google Public DNS". Back in 2008/2009, OpenDNS was hijacking "queries" (NXDOMAIN) typed in the address bar to their own search page ("OpenDNS Guide", or some such) on an opendns.com subdomain. In response, Google launched its own open resolver.^1 (OpenDNS was later acquired by Cisco)
Guess what Google's priorities were when they approached that problem.
If I serve fake responses does that turn off searching via the address bar?
Why doesn't Chromium just have a setting that allows a user to turn off the incessant queries for nonexistent names?
No matter how high-profile the environment, eventually, the rubber will hit the road and some human will be in a privileged position to be able to fix a problem.
That is true for every single service out there. Yes. Including Gmail. Including AWS. Including Twitter. Everywhere.
Depending on size and profile of the service it's more or less people in need of jumping through more or less hoops to get there, but this must be true for any service.
Always keep this in mind when you make the decision to move your data to a cloud service.
It's about time for DNSSEC to be available on all TLDs and for browsers to nag if it is broken.
What's crazy about this is that there's a trivial solution to forged NXDOMAIN responses that people can adopt immediately: just DoH to a provider that doesn't forge NXDOMAIN responses (none of the major providers do).
I sometimes wonder whether the vehemence of the anti-DoH advocacy is rooted in concern that it will cause DNSSEC to lose yet another potential motivating use case.
Think about a coordinated effort by top tier DNS providers globally to stop a giant bot network by simultaneously 'hijacking' DNS responses for the command and control server host-names. In classic DNS this is easy, just intercept the requests at the LDNS provider and return a dummy server IP, all good.
That falls apart with DoH and DNSSEC. With DNSSEC you cannot forge a response to a client that strictly expects signed responses for a particular zone. And with DoH, the various corporate IT shops cannot inspect and 'hijack' the responses. The DoH operator can still change the response, but that moves the capability outside of local corporate IT and into a multinational company that might not agree with your request to 'fix' a problem via assisted DNS hijacking.
So all of these new, safer DNS delivery methods do legitimately impact the ability of "good"* operators to protect the Internet. Is the trade-off worth it, protecting users' DNS traffic versus being able to respond to threats? I think that protecting users' daily traffic is net-net better, as it is a steady-state problem and state-sponsored actors have the resources to subvert a population via DNS. But I also feel the loss of a tool to protect users at the same time. Things like this are never zero-sum.
Disclaimer: I work for Microsoft, and although I don't operate DNS services as part of my job, I have spent a lot of time on this particular topic over the years. These are my opinions, not the company's. I welcome challenges to my opinions; that's how I learn.
*"good" is always a situational thing.
I believe the purpose of this feature is not detecting hijacked requests for valid top-level domains. In other words, a well-written NXDOMAIN interceptor would not cause harm to its intended audience, so they didn't bother trying to detect it.
It's about detecting that an "eng-wiki A aa.bb.cc.dd" record just received from the user's DNS server is actually meant to be eng-wiki served from the corp network, not a stupid ISP page.
Please explain why you hate DNSSEC instead of downvoting things you disagree with.
To me, it's as if DNSSEC has some critical and unfixable security vulnerability, and people who make these decisions decided to stop all work on it, but not reveal the vulnerability because doing so would do too much damage.
This is probably the most comprehensive list of reasons not to use it: https://www.imperialviolet.org/2015/01/17/notdane.html
Maybe this "omnibox" doesn't know whether I want to enter a hostname or a search term, but I do.
Chrome's implementation is terrible as it's designed to just funnel you into Google search, and doesn't give you a good idea of how useful it can be.
If I want to search in just my history, I can simply press CTRL-H and type to search in history.
Update: I hide a site of mine from Google (I don't care about search engine traffic and block all crawlers), and I've noticed that when I give people I know a URL from that site, they often tell me it cannot be found, because the Google search they use all the time does not list it.
If the less tech savvy can't figure out that it does both, then maybe it isn't as intuitive as the browser builders think.
This is not relevant for URL or search bars, since they need to be displayed horizontally. Separate bars mean less vertical screen space, which is still scarce.
In Firefox preferences there is still an option to switch to the old layout with two bars. Sadly, it does exactly the same thing as adding the search bar manually: it gives you a second, redundant search bar and does not turn off search support in the original URL bar.
(I think the omnibox is the right UI though)
Need? Has anyone tried?