Sort of like what Cloudflare does with their "Flexible SSL". As an end user, I have no way of knowing whether Cloudflare is proxying my credit card information in clear text to an insecure origin server.
It made a bit more sense in 2014 when there were more barriers to getting a real cert for your personal blog / forum / whatever - the cost of the cert itself, hosting companies charging for a dedicated IP (because they hadn't gotten the memo on SNI), or the maintenance burden of manually renewing if you ran your own VM.
But Let's Encrypt makes it trivial to auto-provision a real certificate, and many (if not most) hosts support setting it up through their control panels. The HTTP-01 challenge (which is now the default) works fine behind Cloudflare.
If you don't want to (or can't) use Let's Encrypt, Cloudflare themselves offer certificates from a private CA that you can install on your origin. These certs are trusted by their proxies and can last a lot longer than publicly-trusted certs (10+ years I believe), so it's a good option if you're stuck with a server setup that makes you manually upload cert files.
There's just no good reason to proxy HTTPS traffic over HTTP anymore (if there ever was). Enabling it by default is encouraging awful security practices.
I'm a big fan of end-to-end encryption, but I think a statement that broad should include a threat model. Not everyone is saving user credentials, credit card numbers, etc., and if you're primarily concerned about someone hijacking the local network or untargeted national snooping, having HTTPS between the user and Cloudflare is a really big improvement, because far more tampering happens at the edge than between the server serving your content and Cloudflare's network.
I do agree that this should be less and less acceptable as so much of the infrastructure has improved but there are still things like personal blogs and other content sites where you mostly don't want things like hostile ISPs injecting ads or malware onto your pages. That might make a good transition for Flexible SSL — start rebadging the UI to increasingly strongly emphasize that it's not suitable for sites with logins, PII, etc.
Cloudflare should really surface this in the UI when you're using their gateway. Small UI changes noting it would likely go a long way toward nudging sites into better overall security.
When I use Cloudflare as a proxy, I also configure authenticated origin pulls[1] for better endpoint hardening. This makes it a bit more difficult to find a way to bypass the CF proxy, since hunting around on shodan etc. to find the server in the IPv4 space echoing the same content will not work.
[1] https://blog.cloudflare.com/protecting-the-origin-with-tls-a...
I've always hoped that Cloudflare would add a HTTP header indicating the backend encryption status. I filed this issue back in 2015: https://github.com/cloudflare/claire/issues/17
In fact, Nick Sullivan, the Head of Cryptography at Cloudflare, stated a few years ago: "CloudFlare would be very happy to be able to indicate to the user the nature of how data is encrypted beyond the IP you are connecting to. Unfortunately there is no way to do that yet in modern browsers. Soon we will be sending an additional header down to the browser to indicate if a site is using strict SSL, it will be up to the browser to display it." However, as far as I can tell, this has not been implemented.
https://blog.cloudflare.com/introducing-strict-ssl-protectin...
https://medium.com/@ss23/leveraging-cloudflares-authenticate...
(Could have been fixed in the past couple months, but I doubt it.)
Same for Access, by the way.
If I was a group who needed to get eyes on TLS traffic without it looking too suspicious, offering free reverse-proxy services would be the way to go (for attack protection and CDN-like features, of course).
Very few companies run their own servers in their own datacenters these days. They trust their vendors, which you have to do. Even then, they most likely use certs granted by a third party, who could easily grant the cert to someone else, too, and allow your traffic to be snooped.
Why do you single out Cloudflare and not those other service providers?
that's a pretty over the top accusation to make without citing evidence
I don't think the poster needed to allege that Cloudflare offers free reverse-proxy services for diabolical ends. The fact remains that they are a vulnerability so perfectly constructed that you couldn't do better intentionally.
Any major intelligence agency that isn't (/hasn't been) investing heavily in infiltrating cloudflare is incompetent.
Infiltrating CF is far, far easier than any of the other TLS-snooping methods (breaking the encryption, generating a fake cert via a bad CA and intercepting, etc.); it's not ridiculous to think the bogeyman-du-jour probably has fingers in CF (with their knowledge or not, it doesn't really matter), and it'd be irresponsible to assume that TLS traffic going through CF is any more secure than plaintext.
We now log whether HTTP/2 or HTTP/1.1 is used by the browser via JavaScript: `window.performance.getEntries()[0].nextHopProtocol`, which is supported by most modern browsers.
This works because we use Cloudflare, so most of our users get HTTP/2 unless they are behind a corporate proxy, which often downgrades the browser connection to HTTP/1.1; e.g. Cisco WSA doesn't support HTTP/2 yet[1].
We also log response headers on XMLHttpRequests that fail, because sometimes the proxy inserts a header with its name and version (though headers sometimes get stripped by the browser for security reasons, e.g. CORS, and timeouts usually have no response headers at all).
1. https://quickview.cloudapps.cisco.com/quickview/bug/CSCuv329...
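The check described above can be wrapped in a small helper. A sketch (the `classifyProtocol` name and the fallback behavior are mine, not the poster's actual code):

```javascript
// Classify the connection from a PerformanceEntry's ALPN protocol id.
// nextHopProtocol is "h2" for HTTP/2, "h3" for HTTP/3, "http/1.1" for
// HTTP/1.1, or "" / missing when the browser doesn't expose it.
function classifyProtocol(entry) {
  const proto = entry && entry.nextHopProtocol;
  if (proto === "h2") return "HTTP/2";
  if (proto === "h3") return "HTTP/3";
  if (proto === "http/1.1") return "HTTP/1.1";
  return "unknown";
}

// In the browser you'd call it on the page's navigation entry:
//   classifyProtocol(window.performance.getEntries()[0])
```

A run of "HTTP/1.1" results from users on a CDN-fronted site is then a decent signal of an intercepting proxy in the path.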
The ability to downgrade encryption cipher suites, and the inability of most of these boxes to properly validate certificates, results in lower security for most users. I have seen sites with expired certs passed through to users, since the intercepting box replaces the site's cert with one signed by its own root. The browser ends up trusting that cert and showing content that would normally be blocked. This is an interesting mess we have gotten ourselves into. Also interesting in light of the BITS / Andrew Kennedy comments on TLS 1.3, which directly impacts this ability.
0 - https://bugs.chromium.org/p/chromium/issues/detail?id=628819
These settings should persist through browser upgrades too.
That is: clients don't get to decide about encryption; only servers do.
And partially, this makes technical sense. There are fewer servers, and the chance that they get it right is a lot higher. On the other hand, this is nothing more than the platforms pulling all power towards themselves. Getting users used to the paradigm 'we will decide what kind of encryption you get'.
I think browsers are way too friendly to this practice. IT departments and oppressive governments are the main culprits, obviously, but the browser and the TLS implementation are supposed to be on the user's side.
I MITM my network so I can filter out ads and other crap, inject custom stylesheets, and otherwise modify pages so that I can maintain a sane browsing experience even on devices with severely castrated browsers. Need to control JS on something that won't even let you turn it off? What better than stripping out the <script> tags completely before the page even gets there. Want to see the full version of a page instead of some mobile portal? I can change the user agent and other headers on the fly. I can also check whether something is phoning home, and what exactly its communication is:
https://news.ycombinator.com/item?id=6759426
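To illustrate the kind of rewrite pass described above, here's a minimal, hypothetical sketch (the function names are mine; a real intercepting proxy would operate on the decrypted stream and use a proper HTML parser rather than regexes):

```javascript
// Drop <script> elements from an HTML body before forwarding it on.
// A regex is enough to illustrate the idea, though it will miss
// pathological markup a real parser would handle.
function stripScripts(html) {
  return html
    .replace(/<script\b[^>]*>[\s\S]*?<\/script>/gi, "") // paired tags
    .replace(/<script\b[^>]*\/?>/gi, "");               // stray/unclosed tags
}

// Override the user agent on the outgoing request so the upstream
// server sends the full desktop page instead of a mobile portal.
function rewriteHeaders(headers) {
  return { ...headers, "user-agent": "Mozilla/5.0 (X11; Linux x86_64)" };
}
```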
Given the situation with IoT and other "smart" things these days, along with the trend of walled garden ecosystems and HTTPS Everywhere (even for DNS!), I would almost consider an HTTPS intercepting proxy essential for security and privacy purposes. Funny that the article makes no mention of this, but only the usual "evil corporate proxies" scaremongering... then again, it wouldn't fit in their narrative. Proxomitron, Proxydomo, Proxymodo(!), Adsubtract, Admuncher, and the list goes on. These were quite popular a decade ago, and would've remained so had the "security-cult" not driven them into obscurity.
This feels like just another one of those "we want to ensure we force all our content down your throat and make you powerless to stop it" schemes, and I'm pretty confident that I'm already seeing it in action. The previous technique was running JS on the page to detect modifications (including those produced by adblockers), now they're moving that war deeper.
edit: Wow, downvoted already.
tl;dr: My network, my traffic. Piss off with your nannying!!!
There are ways around this - the detection seems to work by inspecting which TLS ciphers are offered and comparing that with what the claimed user agent should offer.
A MITM proxy could easily implement this. On the flip side, Cloudflare could easily get false positives for people with non-default settings (which I suspect is measured in the <0.0001% range, so websites won't really care).
These are the default cipher settings on Firefox 65:
And here are my desktop's current settings:
(which disable ciphers without a DH key exchange - I also block TLS 1.0)
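The detection idea described above - comparing the ciphers offered in a ClientHello against what that browser's default build would offer - can be sketched like this. The expected cipher lists and the `looksIntercepted` helper are illustrative, not Cloudflare's actual fingerprint data:

```javascript
// Abbreviated, hypothetical default cipher lists keyed by claimed browser.
const EXPECTED_CIPHERS = {
  "Firefox/65": [
    "TLS_AES_128_GCM_SHA256",
    "TLS_CHACHA20_POLY1305_SHA256",
    "ECDHE-RSA-AES128-GCM-SHA256",
  ],
};

// Flag a connection when the offered cipher list doesn't match the
// browser's known defaults. Order matters in real fingerprints (e.g.
// JA3), so the ordered lists are compared element by element.
function looksIntercepted(claimedBrowser, offeredCiphers) {
  const expected = EXPECTED_CIPHERS[claimedBrowser];
  if (!expected) return false; // unknown browser: can't judge
  return expected.length !== offeredCiphers.length ||
         expected.some((c, i) => c !== offeredCiphers[i]);
}
```

As the parent notes, a MITM box can defeat this by replaying the browser's own cipher list upstream, and users with customized settings (like the disabled-DH example above) will trip it as false positives.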
Frankly, I see a case for the use of both. Let us take the example of a bank. For accessing your accounts on the bank's website, it would be a very good thing if they could detect that, even though you landed on their login page via TLS, something has intercepted the connection and downgraded you from TLS 1.2 using ChaCha20-Poly1305 to TLS 1.0 using 3DES-MD5, and throw up a warning that your encrypted connection may have been intercepted and your password and accounts might be at risk if you continue to sign in. It doesn't matter whether that TLS intercept is happening on a coffee shop wifi that's been popped by the kid in the apartment across the street with a cantenna, or the SuperSecure(TM) feature of Grandma's new Comcast cable modem. I sure as hell would want to know that someone has been monkeying maliciously or incompetently with my encryption before signing into a page from which my life savings and investments can be controlled.
But on the other hand, let us take the case of individual workstations within that bank's internal network. A call center operator is going to be signing into various accounts and have at least some degree of access to sensitive data, like account numbers, balances, PII, etc. All the data breaches in the news point out how important it is to keep this data from leaking, and it is absolutely that bank's security department's job to monitor and control what data is entering and leaving those workstations, including on encrypted channels. Again, it doesn't matter whether that's blocking inbound malvertisements from lunchtime Facebook browsing or a clever outbound data exfiltration channel.
On the gripping hand, I recall one of the security folks at Google commenting on their bugtracker that local anti-virus' TLS intercept was one of their biggest impediments to securing the browser. And the bank's internal TLS intercept in the above scenario does make for a high-priority target for an attacker, and is potentially a Game-Over-class single point of failure were it popped.
I'm personally of the opinion that TLS interception is a bad idea in most cases, and making it less common is a net win for overall privacy and security for all involved. But, it is a tool, and can have valid uses.
What happened to man-in-the-middle?
First time I heard this, and I already prefer it to man-in-the-middle as it sounds funnier :-).