You'd have to mask every informational API with a plausible corrupted alternative.
It shouldn't be confusing, because it's really fairly simple.
The gist is this: as long as determinism holds as a system property, you can leak information through the absence of something when compared to an expected baseline. This is how inference works in many respects: from which properties are present or absent, you can deduce additional information that was never explicitly given.
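A toy sketch of that idea (the API names here are invented for illustration; real scripts probe dozens of surfaces): if a privacy tool removes an expected API instead of spoofing it plausibly, the removal itself becomes a distinguishing property.

```python
# Toy illustration: inferring group membership from what is *absent*.
# The probe names are hypothetical.
def classify(apis: set) -> str:
    # A stock browser would expose all three probes; a blocker removes some.
    expected = {"canvas_hash", "webgl_renderer", "audio_context"}
    missing = expected - apis
    if missing:
        # The absence of an expected property is itself information.
        return f"likely blocker (missing: {sorted(missing)})"
    return "looks like a stock browser"

print(classify({"canvas_hash", "webgl_renderer", "audio_context"}))
print(classify({"canvas_hash"}))
```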
Most gifted problem solvers probably couldn't tell you that this is what they are doing, because it's unconscious, often the result of years of observation.
Computers fundamentally require certain system properties, such as determinism, to do work in the first place. But you can inject non-determinism into those processes in ways that won't break the underlying subsystems, as long as it stays within an expected range, and that makes the output indeterminate. An indeterminate fingerprint is useless.
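A minimal sketch of that noise-injection idea (the value, jitter bounds, and hashing scheme are all invented for illustration): if the jitter stays within what legitimate consumers of the value tolerate, nothing breaks, but any fingerprint derived from the value stops being stable across sessions.

```python
import hashlib
import random

def noisy_reading(true_value: float, rng: random.Random) -> float:
    # Jitter small enough that a legitimate consumer of the value still
    # works, but large enough to perturb any hash derived from it.
    return true_value + rng.uniform(-0.05, 0.05)

def fingerprint(value: float) -> str:
    # Stand-in for whatever digest a fingerprinting script computes.
    return hashlib.sha256(f"{value:.6f}".encode()).hexdigest()[:12]

# Two sessions observing the same underlying system get different prints.
fp1 = fingerprint(noisy_reading(16.7, random.Random(1)))
fp2 = fingerprint(noisy_reading(16.7, random.Random(2)))
print(fp1, fp2, fp1 != fp2)
```

The design point is the "expected range" constraint: the noise must be bounded so the value remains plausible and functional, only the exact low-order bits become indeterminate.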
In the case of outright blocking the code from running, you leak the fact that you are blocking it: the site expects a range of values back, and a null (the response when nothing gets sent back) is itself a value (or state, if you want to be technical).
The site then only has to test for this semi-unique case (i.e. a null identifies the group of people who are blocking it) and refuse to respond to you. They are the gatekeepers, because they control their infrastructure.
Incidentally, this is how almost all "Are you human?" tests work. They collect an intrusive fingerprint, and if it falls within a range corresponding to a known spectrum of user values, you are allowed to continue to the site.
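Server-side, the gate can be as crude as this sketch (the score range and bucket labels are invented): a null fingerprint is treated as its own bucket rather than as "no information," and anything outside the known human range is rejected too.

```python
# Hypothetical server-side gate. A missing fingerprint (None) is not
# "no information" -- it identifies the group that blocked the probe.
HUMAN_RANGE = range(1000, 9000)  # invented range of plausible human scores

def gate(fingerprint_score):
    if fingerprint_score is None:
        return "blocked: null is itself a value"
    if fingerprint_score in HUMAN_RANGE:
        return "allowed"
    return "blocked: outside known user spectrum"

print(gate(None))     # blocker detected by the absence of a value
print(gate(4242))     # plausible human
print(gate(999999))   # bot-like outlier
```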
This is a rough, boiled-down explanation; it can get quite a bit more abstract and technical when asking whether determinism is present at all (i.e. how you test for it).
Ultimately, if you understand determinism, you fundamentally understand the limits of computation and how computers work at a bare-bones level. It also lets you recognize when certain types of problems are impossible, so you don't waste your time on them or on unproductive avenues.
In principle you could build a browser that, to a good approximation (you realistically won't be able to prevent timing leaks, for example), does not leak any information about the system it is running on: just isolate it from the host system. With more effort you could let details about the host system leak into the browser but not out onto the network. Of course, some information has to flow from the host system through the browser and out over the network, for example keystrokes and mouse movements; otherwise you could not do much with this browser.
So you could still try to fingerprint users by their choice of words, typing patterns, and the like, though this could be made a bit harder by filtering out high-frequency timing information. But at least one could no longer simply fingerprint the host system. This is again a trade-off: your screen resolution is not only good for a few bits of a fingerprint, it can also be legitimately used to serve content at the proper resolution.