They once blocked workers.dev (Cloudflare Workers) wholesale[1], resulting in a huge flood of issue reports for a few FOSS services of mine. Guess they've never heard of public suffixes.
This one appears to be someone who read about DNS rebinding attacks somewhere, then pulled the trigger without understanding them. Or maybe I'm even overestimating them: DNS rebinding only came up as a justification very deep into the discussion.
To make matters worse, clients using these block lists have update frequencies all over the place, so even after changes like this are reverted, you can never be sure when your stuff gets unborked for all your users.
> It's a security issue: imagine you're running a webserver on localhost and a site decided to access it from outside, whether to fingerprint you or act nefariously. There should be no reason for a third party to access localhost. But do tell me, why should we trust sites accessing localhost?
It makes no sense to me. Unless someone knows of a better reason, I'm of the opinion that this change should be reverted.
Funny story though: I used to park wildcard sub-domains on 127.0.0.1 just to keep the bots off the load balancers, and a customer said we were running a vulnerable version of PHP. I said we had no installations of PHP anywhere in production. Turned out they were scanning one of my parked wildcard sub-domains, so they were effectively scanning their own laptop, which had some old PHP web app running on it. That also told me they were not validating certs.
That sounds like a good practice -- I wonder why it isn't done more often.
EDIT: On second thought I'm not so sure. I'm not an expert here, so I won't try to guess :)
> Stupid example of why it may matter: say you installed LAMP on your computer several years ago, you're not using PHP frequently, and you haven't kept it up to date (so it probably contains a few nasty security vulnerabilities), but it still opens up on boot and listens on localhost.
> Now you open some website; it accesses 127.0.0.1, checks for the LAMP vulnerability, and exploits it if found. Congratulations, you have been pwn3d.
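The probe in that scenario can be sketched outside the browser too. A hedged illustration (the function name and port list are mine, not from the thread): raw sockets asking which localhost ports accept connections, which is effectively what a malicious page's fetch() calls boil down to:

```python
import socket

def probe_localhost(ports, timeout=0.2):
    """Return the subset of `ports` that accept a TCP connection on 127.0.0.1."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex(("127.0.0.1", port)) == 0:  # 0 means the connect succeeded
                open_ports.append(port)
    return open_ports

# Typical dev-server ports an attacker might fingerprint; on a machine with
# nothing listening this typically prints an empty list.
print(probe_localhost([80, 3000, 8080, 8888]))
```

Any port that shows up here is a port a website could also reach, absent blocking.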
Interesting but very rare scenario?
The logic seems OK for "out in the wild" cases -- but uBlock still lets you override stuff if you know what you're doing.
We got a bug report from someone in the company that the share icon was missing, and after investigating we saw the other icons (for edit and delete) were visible but not sharing. Long story short, they used an adblocker with a setting to block social media sharing links (Facebook likes, Twitter follows, etc) and it was also removing our icon.
In this instance, maybe it'd be fine to turn it off for localhost and keep it on for staging but still...
Plus the privacy list is genuinely good. Just not when it gets in the way of building things.
https://wicg.github.io/private-network-access/
https://developer.chrome.com/blog/private-network-access-upd...
Also, ws/wss has no SOP/CORS, which could be a problem, but that has nothing to do with the domain blocking here.
Should websites be allowed to resolve airprint or airdrop based devices, given the history of CSRF vulnerabilities in consumer routers? Probably not.
Devs seem to forget that most humans are not developers, and easylist's decision should be read in that context.
The point of those lists is to block access to local domains so a malicious website that got through the filters isn't able to pwn your whole network.
And if you're arguing that websites should be allowed to access the local network, then you probably don't give a damn about securing those devices anyway.
Once I had to apply a firmware update to a device (don't remember what it was). I had to install some vendor's software but surprisingly the instructions said to then visit a public URL like fwupdate.vendorswebsite.com, which indeed applied the update to my physically connected device.
I dug into it, and it turns out the software launches a local web server listening on localhost, exposing an API that the website accesses over plain HTTP with CORS opened up. This web server talks to the device connected over USB.
This felt like an egregious breach of privacy--public websites should not be allowed to arbitrarily exchange data with locally-bound servers. Even though this was the intended design of the firmware update process, my browser really should not have let this occur by default without my explicit opt-in.
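A minimal sketch of that pattern (the device name, endpoint, and port handling are made up; only the vendor hostname comes from the story above): a localhost-bound HTTP server whose single CORS header grants one public origin read access to its API.

```python
import json, threading, urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

VENDOR_ORIGIN = "https://fwupdate.vendorswebsite.com"  # from the story above

class FirmwareAPIHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"device": "example-widget", "fw": "1.0.3"}).encode()
        self.send_response(200)
        # This one header is what lets the vendor's public website read the response:
        self.send_header("Access-Control-Allow-Origin", VENDOR_ORIGIN)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), FirmwareAPIHandler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

resp = urllib.request.urlopen("http://127.0.0.1:%d/status" % server.server_address[1])
print(resp.headers["Access-Control-Allow-Origin"])  # prints the vendor origin
server.shutdown()
```

From the browser's point of view this is an ordinary cross-origin request that happens to land on 127.0.0.1, which is exactly why it works by default today.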
> It's a security issue: imagine you're running a webserver on localhost and a site decided to access it from outside, whether to fingerprint you or act nefariously. There should be no reason for a third party to access localhost. But do tell me, why should we trust sites accessing localhost?
That web server would also need CSRF protections and a CORS policy scoped to that specific domain. And if this were an attacker's domain, it wouldn't take long to seize it.
To fully extrapolate that: the server would only be accessible from the user's own machine, so there's no implication of "third-party access". Maybe if they were demanding that the website have a higher class of certificate verification I'd understand, but frankly, without an example of where and how this is a vector, I'm skeptical.
The reasoning they give makes no sense; their writing style also doesn't match their previous commits. Maybe I'm looking too deep into this, but this commit makes absolutely 0 sense and should be reverted.
From the commit set (e.g., [0]), it looks like he was expanding EasyList's blocking of sites that use 127.0.0.1 DNS records to carry out DNS rebinding attacks and fingerprinting, and overlooked this legitimate use case for such records.
Legitimate, that is, as long as all of the domain owners are trusted, because this does open up opportunities for content served from those domains to punch through the same-origin policy and read back data served from 127.0.0.1. This can be a security hole; e.g., I've seen browser extensions in the wild which jury-rig IPC to an external helper process by opening up an HTTP API on a local port.
[0] https://github.com/easylist/easylist/commit/f11ee956a6e585d8...
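For context, the standard defence a localhost-bound API can apply against such rebinding tricks is a Host-header allow-list: only honour requests whose Host is a name the server expects, so a request arriving via some public domain an attacker pointed at 127.0.0.1 is rejected even though the TCP connection is local. A minimal sketch (names and port are mine):

```python
# Hostnames this localhost-bound helper is willing to answer for.
ALLOWED_HOSTS = {"localhost", "127.0.0.1"}

def host_is_allowed(host_header: str) -> bool:
    """Strip an optional :port from the Host header and compare to the allow-list."""
    hostname = host_header.rsplit(":", 1)[0]
    return hostname.lower() in ALLOWED_HOSTS

print(host_is_allowed("localhost:8912"))         # True
print(host_is_allowed("evil.example.com:8912"))  # False -- rebinding attempt rejected
```

The check works because the attacker controls the DNS record but not the Host header the browser sends, which always names the domain the page actually requested.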
I'm not sure why this applies to first-party browsing, though. In its current form (https://github.com/easylist/easylist/blob/master/easyprivacy...) several of these domains got the $third-party modifier, which should make CORS fail, and that should resolve most of the fingerprinting risk. I'm not sure why that isn't the default, to be honest.
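For reference, the EasyList/uBlock static filter syntax involved looks like this (example.com is a placeholder; lines starting with ! are comments):

```
! block example.com only when it is loaded as a third-party resource
||example.com^$third-party
! block it everywhere, first-party navigation included
||example.com^
```

The scoped first form would break fingerprinting embeds while leaving direct visits (and local development) alone.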
That said, if you're developing software you should probably be running without any addons like uBlock enabled to prevent surprises in production for your non-uBlock users. Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues) so development doesn't even reflect real life deployments. Secure origins matter!
Lastly, you can't be sure any of these domains won't eventually resolve to a real IP address somewhere down the line, unless you own them. They're very useful but also very out of your control and that makes them a potential security risk.
The workaround should be obvious: add an entry to your hosts file, using either a TLD you own or one of the reserved TLDs (.test, .example, .localhost, .invalid, .home.arpa, and maybe .local, though that can conflict with mDNS).
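A minimal hosts-file sketch (the project names are placeholders I've made up):

```
# /etc/hosts (C:\Windows\System32\drivers\etc\hosts on Windows)
# hosts files don't support wildcards, so list each name explicitly:
127.0.0.1   myapp.localhost
127.0.0.1   api.myapp.test
```

Each name then resolves locally without any public DNS record for a block list to key on.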
If you're using Chrome, you can probably use .localhost already, as it resolves those names to localhost for you. Still, adding the .localhost names you use to your hosts file will ensure that things actually work as intended.
What are you talking about? Certificate issuance and DNS A/AAAA records are entirely decoupled. Use the ACME dns-01 challenge to get a cert for domains resolving to anything, including 127.0.0.1 or ::1. You can even get a wildcard cert (which requires dns-01), with subdomains pointing to localhost. I use Let's Encrypt certs for localhost and LAN every day.
Edit: a little more precision.
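A hedged sketch of that flow (the domain is a placeholder; assumes certbot is installed and you control the domain's DNS):

```
# Issue a cert for a name that only ever resolves to 127.0.0.1,
# using the dns-01 challenge:
certbot certonly --manual --preferred-challenges dns -d local.example.com

# certbot prints a TXT record value; publish it at
# _acme-challenge.local.example.com, then continue. The A record for
# local.example.com can point at 127.0.0.1 the whole time, since
# dns-01 never connects to the host the name resolves to.
```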
You can point a subdomain of your own domain at localhost no problem, and you can even use a local DNS server so the record never leaves your network, meaning nobody can abuse your localhost record and your domain doesn't get filtered out by tools like these.
However, I assume someone using fbi.com because it happens to resolve to localhost doesn't own a domain (or can't be bothered to set up a record of their own).
It seems to me there's a higher risk of uBlock blocking something and breaking your site than of it making something work that wouldn't work for users without it. I once had a filter block something called /share/ or share.js; fortunately I noticed during development. I definitely prefer having it enabled while developing.
> Besides that, you can't get HTTPS for these domains (without the mess of a custom CA and even then you'll run into CT issues)
Indeed. I recently had to do this and found mkcert [1] which makes it very easy to do. But it's overkill for most situations.
Of course you can; you just can't use HTTP validation for it. Use DNS validation and it works fine.
I own a domain and internally use local.domain.com for all internal sites. Wildcard and specific names.
I can generate certs using ACME/LetsEncrypt.
So, everything, including test sites could be on that domain.
For reference, I use Pi-hole and OPNsense, and internal machines on DHCP and static IPs get local.mydomain.com resolution too.
In this specific case, it's about a bunch of generic domains set up by other people.
In your Pi-hole example the situation would be even better, because you don't need to publish A records for the domains anywhere. That means nobody can abuse your domain for fingerprinting workarounds, but you still maintain complete control.
If you know the IP you want to block, you should just block the IP itself instead of chasing down every domain that points at it, especially since those can change at any time.