> Contrary to widespread belief, public key pinning [19] — an HTTPS feature that allows websites to restrict connections to a specific key — does not prevent this interception. Chrome, Firefox, and Safari only enforce pinned keys when a certificate chain terminates in an authority shipped with the browser or operating system. The extra validation is skipped when the chain terminates in a locally installed root (i.e., a CA certificate installed by an administrator).
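For concreteness, HPKP (RFC 7469) pins are delivered in an HTTP response header like the one below; the base64 values are placeholders, not real key hashes. The quoted behavior means the browser only enforces these pins when the chain ends at a built-in root, not a locally installed one:

```
Public-Key-Pins: pin-sha256="PrimaryKeyHashBase64Placeholder=";
    pin-sha256="BackupKeyHashBase64Placeholder=";
    max-age=5184000; includeSubDomains
```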
Seems like a strange default to me. I feel like the user should be notified of this, for instance if they're using a work computer to access their bank account or something like that.
That's not to say I disagree with the sentiment that employers (and other organizations providing access to devices) should be obliged to disclose this, but that is perhaps more a legal and educational issue than a technical one.
Hah. That's precisely the argument I made when arguing that there should be an opt-out for addon signature verification (requiring admin permissions to toggle, if they insist), because you have already utterly lost the security game if someone has admin on the machine.
But no, they argue that they must defend against malware with admin permissions injecting addons into the browser. Because that's a fight worth fighting and the perception of the browser's security is somehow more important than user freedom.
Employees are often required to install local certs (or applications/scripts that do that) - that doesn't mean the host is entirely compromised.
If it involves patching and recompiling the browser it wouldn't be that trivial for your average sysadmin. Besides I don't see why the admin would be hostile to the users being aware that they're being monitored. As you point out companies generally disclose that anyway.
Had public key pinning existed before companies started intercepting their internal TLS traffic, this wouldn't be an issue. But TLS interception was unfortunately already common when public key pinning was developed, so without that exception nobody could ever have deployed it: enabling it would have instantly broken the site for every user behind an interception proxy.
And in a certain way, users are already being warned, as long as their bank uses an EV certificate: the EV green bar won't appear, since EV certificates are only allowed from a hardcoded list of CAs (unfortunately, according to https://www.grc.com/ssl/ev.htm that's not the case for Internet Explorer).
That's the point...
Chris Palmer also wrote a really great blog post about this: https://noncombatant.org/2015/11/24/what-is-hpkp-for/
This is one reason why squid has a bump/splice feature. Look for the SNI in Wireshark, then configure squid to let Netflix packets go through untouched. I am not aware of any other way to get Netflix and Snapchat to work in a MITM network. My experience is that you have to create an exception and not intercept. Not 100% sure. YMMV.
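For reference, a peek-and-splice setup in Squid (3.5+) looks roughly like this; the hostnames are just examples, and you'd verify the actual SNI values (e.g. in Wireshark) for your own traffic:

```
# squid.conf sketch: peek at the SNI, splice (tunnel) pinned
# services untouched, bump (intercept) everything else.
acl step1 at_step SslBump1
acl nobump ssl::server_name .netflix.com .snapchat.com

ssl_bump peek step1      # read the SNI without decrypting
ssl_bump splice nobump   # pass matching hosts through untouched
ssl_bump bump all        # MITM the rest
```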
This still should not be the default, rather corps should have an easy about:config switch they can flip. The default should protect private users.
Meanwhile, of course, if someone did install a third-party root cert on my phone somehow, I'd never know because I always ignore & dismiss the wolf-crying warning.
The fundamental reason I disagree with you is that a computer should do exactly what its administrator wants it to do. If I install a root cert, it should trust that cert exactly as much (if not more than) every other root cert in the world.
Allowing uninspected outbound traffic makes it trivial for an attacker to exfiltrate data, an employee to accidentally or purposely release data, etc.
The arguments around warning fatigue are specious. The exact same mechanism that currently sets the number of warnings shown for a pinning failure stemming from a user-added certificate to zero could easily make it one instead, or be tied to a "don't show me these warnings again" checkbox. Experimentation and data could determine whether and to what degree this was effective, as is routinely done with related warning changes that have far less potential upside. But when you bring up the possibility of settling the question with real data, the argument morphs into pure philosophy.
The philosophical points are twofold. First, a claim already raised here: fighting local admins is pointless because they'll always win, and you don't want to get into an arms race. I attribute this to the fact that browsers developed on poorly sandboxed desktop platforms where admins are de facto root and no intelligent statement of any kind can be made about limitations of their behavior. On those platforms this isn't a crazy approach (although its shoulder-shrugging fatalism is distasteful to me even there).

Fortunately, those aren't the only platforms we have today: on systems like Android, the expectation is that corporate admins act through narrow, carefully controlled channels and have no powers beyond those. There, the platform wins arguments with admins pretty much all the time; the arms race was over before it began. Without the risk of escalation from admins, the only question is whether the user is properly aware of the consequences of having an extra CA added to their trust store, and again I refer to the point I made above: this can be settled with data. Rather than bend over backwards to give admins the benefit of the doubt, let's gather actual data on the degree to which users are comfortable with this behavior. And if they aren't, well, then the admin is an adversary and we have a duty to protect the user.
When you make this argument, however, the discussion becomes /really/ philosophical: people will start saying that limiting admin powers is anti-user-freedom, despite the fact that the user of the device clearly has a greater ability to make decisions for themselves about their security than in the free-for-all common to platforms of yore. Why that matters in this discussion is beyond me: even if you subscribe to this belief, the horse is out of the barn and no amount of smugly screwing users will fix that.

And some will assert that admins are users too, and that we need to serve those markets well. But the fact that people will give you money does not mean you should take it: if the data gathered above indicates that users do not want their traffic intercepted, then that, in my mind, should be final. If the amount of money on the other side convinces members of the security community to hurt users, then in my view we should just give up the pretense that we're the good guys.
Except it isn't. Even simple things like Cloudflare's SSL termination allow the traffic to go unencrypted over the internet and be intercepted by third parties.
http://www.theregister.co.uk/2016/07/14/cloudflare_investiga...
> We deployed these heuristics on three diverse networks:
> (1) Mozilla Firefox update servers,
> (2) a set of popular e-commerce sites, and
> (3) the Cloudflare content distribution network.
> In each case, we find more than an order of magnitude more interception than previously estimated, ranging from 4–11%.
> As a class, interception products drastically reduce connection security. Most concerningly, 62% of traffic that traverses a network middlebox has reduced security and 58% of middlebox connections have severe vulnerabilities. We investigated popular antivirus and corporate proxies, finding that nearly all reduce connection security and that many introduce vulnerabilities (e.g., fail to validate certificates).
This won't solve all use-cases, but selfishly, it will solve mine at DNSFilter: if a browser could recognize our SSL cert, or a special field in our cert, and present the user with a block message and a static link to learn more, it would eliminate the need for us to have our customers install a CA of ours and MITM traffic. We have not yet done so, and I'd prefer not to, but it seems to be the industry-standard way of avoiding users being confused by errors when we block/MITM an SSL site.
I might open-source it if there's interest but it's relatively basic.
I have a little side project where I try to implement a proxy for myself. I want to remove ads and be able to scan and cache downloads. I trust the adblock plugins and endpoint security products far less than a MITM proxy I wrote myself.
Maybe this will have to wait until after the team from this paper releases their fingerprints: https://github.com/zakird/tlsfingerprints
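If it helps anyone pondering a similar side project: the per-request filtering decision itself is simple. Here's a minimal, hypothetical suffix-matching blocklist check (domain names are just examples), independent of all the proxy/TLS machinery:

```python
# Minimal host-blocklist matcher: the core per-request decision a
# filtering proxy makes. Blocklist entries and domains are examples.
BLOCKLIST = {"ads.example.com", "doubleclick.net"}

def is_blocked(host: str) -> bool:
    """Block a host if it, or any parent domain, is on the list."""
    host = host.lower().rstrip(".")
    parts = host.split(".")
    # Check "a.b.c", then "b.c", then "c" against the blocklist.
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

print(is_blocked("tracker.doubleclick.net"))  # True (parent domain listed)
print(is_blocked("example.com"))              # False
```

The suffix walk is what makes one entry like `doubleclick.net` cover every subdomain without wildcards.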
I hate the AV industry in infosec. It does not work well and in most cases reduces security. Unbelievable that it's still required by so many compliance regimes.
It's an IP routing concept. AS Numbers are used to refer to different networks (run by different ISPs and providers) on the internet.
If you've read that out of the paper you read a different one. Quote:
"Our grading scale focuses on the security of the TLS handshake and does not account for the additional HTTPS validation checks present in many browsers, such as HSTS, HPKP, OneCRL/CRLSets, certificate transparency validation, and OCSP must-staple. None of the products we tested supported these features."
Read: Some products got the absolute basics right. None of the solutions did anything that can reasonably be called "good".
> I expected much higher general standards.
I didn't. I don't expect anything from security appliance vendors.