Out of curiosity: why, if you generally trust your ISP? Do you get worse performance using their DNS servers?
Speaking of stats, I can also see what IoT/Cell devices are requesting to keep an eye on their behavior and look for interesting patterns of DNS requests.
I have honestly never used the ISP's DNS servers, so I don't know what their performance is like. It's just muscle memory for me to set up my own home Linux router as a DNS server. I highly doubt they could top the performance of Unbound plus cron jobs that request commonly used records on an hourly basis. I do know that my performance is better than talking to the DoH/DoT servers on the internet: cached response time is in microseconds vs 23ms for CF, and non-cached response time is generally between 50ms and 70ms vs 80ms to 160ms for CF non-cached.
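A cache pre-warming cron job like the one described could be sketched like this (the path and domain list are hypothetical placeholders, not from my actual setup):

```crontab
# /etc/cron.d/dns-prewarm -- sketch of an hourly cache warmer.
# Re-requests commonly used records against the local Unbound
# instance so answers stay cached; domains here are placeholders.
0 * * * * root for d in example.com example.org; do dig @127.0.0.1 "$d" A +short >/dev/null 2>&1; done
```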
Another nifty option in Unbound is to cache the "Infrastructure" records and to "Keep Probing" multiple nodes. Together these strike a nice balance of speed and resilience, especially when someone's name server is having a moment but their status page is green.
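For reference, the two knobs mentioned look roughly like this in unbound.conf (values are illustrative; `infra-keep-probing` is only in relatively recent Unbound releases):

```conf
server:
    # Cache timing/lameness data for this many name server hosts.
    infra-cache-numhosts: 50000
    # Keep probing hosts that appear down instead of writing them off.
    infra-keep-probing: yes
```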
unbound-control dump_infra | wc -l
1235
These numbers are thrown off a bit by my cron jobs, which request records hourly even for sites I am not visiting all the time and for records whose authoritative TTL is under 3600 seconds. Some of the government domains in my cron job seem to be throwing off the curve; I will reach out to them.

total.num.cachehits=20949
total.num.cachemiss=8010
total.num.prefetch=753
total.recursion.time.median=0.0698958

Yeah, local caching is a good point if your operating system doesn't already do it in the DNS client.
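As an aside, the hit/miss counters quoted above reduce to a cache hit ratio with a short pipeline. This sketch replays the quoted numbers; you could pipe in live `unbound-control stats_noreset` output instead:

```shell
# Compute the cache hit ratio from unbound-control stats-style output.
# The variable below replays the counters quoted above.
stats='total.num.cachehits=20949
total.num.cachemiss=8010'
printf '%s\n' "$stats" | awk -F= '
    /cachehits/ { hits = $2 }
    /cachemiss/ { miss = $2 }
    END { printf "hit ratio: %.1f%%\n", 100 * hits / (hits + miss) }'
# prints: hit ratio: 72.3%
```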
> This also gives me the option to block domain names used for dark patterns or outright malevolent behavior.
I wonder how long this will actually remain possible, given that with DoH it now seems entirely feasible for websites to provide their own application-level DNS resolver?
You're quite welcome!
> I wonder how long this will actually remain possible, given that with DoH it now seems entirely feasible for websites to provide their own application-level DNS resolver?
For me, forever. Applications cannot bypass my DNS unless they hard-code IP addresses in the application. Windows Update does have some hard-coded IP addresses it can fall back on.
It is often said that DoH can't be blocked because in theory it can be hosted on any generic CDN IP pool, but to my knowledge this has never been done. Quite the opposite: most DoH/DoT providers use vanity IP addresses. I null route them and NXDOMAIN the canary domain use-application-dns.net, which is entirely optional for applications to honor but a nice signal to the ones that behave. Some vendor may decide one day to host their own DoH/DoT servers, but I suspect I would learn about them, and I would likely just avoid buying/using that device/application.
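The canary-domain block described above can be expressed in unbound.conf along these lines (a sketch; Firefox is the main client known to honor this domain):

```conf
server:
    # Answer NXDOMAIN for the DoH canary domain so well-behaved
    # clients disable their built-in DoH and use the local resolver.
    local-zone: "use-application-dns.net." always_nxdomain
```

The null routes themselves would just be ordinary blackhole routes on Linux, e.g. `ip route add blackhole <resolver-ip>`.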
Perhaps some day a DoH provider will be so bold as to use a generic CDN pool, and I will have to address that issue when it arises. I suspect this would be more challenging for the provider, as the app/device would need a way to discover this pool (a DNS name, HTTP headers, API calls, etc.) unless they hard-code IPs. Either way I could dynamically null route them.
Is there some other option where you talk directly to the top-level DNS roots and the authoritative nameservers?
Edit: my bad, you said you talk to the root servers directly. Not sure how to delete comments.
It's still a valid question. You are right, one has to bootstrap the root servers. There are a few ways to do this. Assuming one had a working DNS server at some point in the past, they can run:
dig @e.root-servers.net +nocookie +tries=4 +retry=4 +time=8 . ns | grep -Ev "^;|^$" > /etc/unbound/named_hints.tmp # sanity check this
and then do sanity checks on the output prior to loading it as hints in Unbound. The 3K or so root server instances are anycast IP addresses and rarely change, so this file will not go stale for a very long time, making thumb drives a valid way to store and transfer it.
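One possible sanity check before loading the file (a sketch, assuming the hints file produced above): verify that every root letter a through m made it into the dump.

```shell
# Sanity-check a freshly fetched root hints file before loading it
# into Unbound: every root letter a..m should appear at least once.
check_root_hints() {
    for letter in a b c d e f g h i j k l m; do
        grep -q "${letter}\.root-servers\.net" "$1" || {
            echo "missing ${letter}.root-servers.net" >&2
            return 1
        }
    done
    echo "root hints look sane"
}
# e.g.: check_root_hints /etc/unbound/named_hints.tmp
```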