So I assume that someone has to be on your network to use a tool like Charles Proxy. Is the combination of securing your network plus HTTPS "enough" security to keep web and app traffic safe? What are other aspects I'm missing? Would love to learn more if anyone is willing to share some good resources. TIA.
Protection from MITM with HTTPS (and TLS in general) relies on certificate validation (or more exotic key setups). Commonly used browsers do a good job (barring whatever security issues are found from time to time), but apps are a mixed bag.
Sometimes they accept any certificate, from any issuer, including self-signed certificates. Sometimes, the certificate needs to match the domain, but any issuer is fine, including self-signed. Sometimes, the certificate needs to match the domain and be issued by a widely accepted CA. Sometimes, the certificate needs to be issued by one of a small list of issuers, but any domain is fine. Sometimes, the certificate needs a matching domain and must be issued by one of a small list of issuers.
Also, not all apps check certificate expiration. There are a lot of ways to get this wrong, so the fact that an app says HTTPS, uses port 443, or even that Wireshark shows TLS doesn't tell you much.
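To make the failure modes above concrete, here is a small sketch using Python's ssl module as one illustration (the same knobs exist in most TLS stacks): a strict client validates the chain, the domain, and expiration, while a sloppy one switches some or all of that off.

```python
import ssl

# What a well-behaved client uses: system CA store, domain check on,
# chain (including expiration) verified.
strict = ssl.create_default_context()
assert strict.check_hostname is True
assert strict.verify_mode == ssl.CERT_REQUIRED

# The "accepts any certificate, from any issuer, including self-signed"
# failure mode: both checks disabled. (check_hostname must be turned off
# before verify_mode can be relaxed.)
lax = ssl.create_default_context()
lax.check_hostname = False       # domain no longer needs to match
lax.verify_mode = ssl.CERT_NONE  # issuer and expiration no longer checked
```

A connection made with the `lax` context will happily accept a MITM proxy's self-signed certificate, which is exactly what tools like Charles Proxy rely on.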
So someone outside your network (and outside the destination server's network) cannot read your HTTPS calls at all. Someone inside your network could read your traffic if they tricked your browser into talking through a proxy instead of talking directly out across the internet. This is why you should never use captive portals / hotel WiFi that replace SSL certificates with their own: they're acting as a proxy for your traffic.
But in broad terms, if you are connecting directly to an external site without use of a proxy, HTTPS is secure end to end.
Here is one way to check a site's certificate fingerprints yourself using openssl:
for i in news.ycombinator.com ycombinator.com www.ycombinator.com; do
  echo -en "${i}: "
  openssl s_client -servername "${i}" -connect "${i}:443" < /dev/null 2>/dev/null \
    | openssl x509 -fingerprint -noout
done | sort -t"=" -k2 | awk '{print $NF "\t" $1}' | column -t
Fingerprint=22:05:8D:96:A0:F7:9B:8F:B8:1D:0F:74:EC:4B:76:8F:84:B0:42:49 www.ycombinator.com:
Fingerprint=5D:70:F0:DC:E0:AF:67:A0:8F:BC:2F:B8:49:F0:79:5D:8B:FF:49:93 news.ycombinator.com:
Fingerprint=C4:A6:FF:38:83:13:31:DC:14:01:3D:05:E8:3B:29:95:FD:AE:9B:0E ycombinator.com:
One could diff the output in a script and send an alert if there is a change from one run to the next. If you do this, factor in expiration and test from multiple locations; that is, you can expect the cert to change some time before it expires, hopefully. To check the dates:
openssl s_client -servername news.ycombinator.com -connect news.ycombinator.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -dates
notBefore=Sep 7 00:00:00 2021 GMT
notAfter=Oct 8 23:59:59 2022 GMT
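A monitoring script along those lines needs to parse the dates above. Here is a hedged sketch (the helper names are hypothetical) that parses the notBefore=/notAfter= lines printed by `openssl x509 -noout -dates` and computes days until expiry, so you can alert before the cert, and thus its fingerprint, is due to change:

```python
from datetime import datetime, timezone

def parse_openssl_dates(text):
    """Parse the notBefore=/notAfter= lines from `openssl x509 -noout -dates`."""
    dates = {}
    for line in text.splitlines():
        key, _, value = line.partition("=")
        # openssl prints e.g. "Sep  7 00:00:00 2021 GMT"
        dt = datetime.strptime(value.strip(), "%b %d %H:%M:%S %Y %Z")
        dates[key] = dt.replace(tzinfo=timezone.utc)
    return dates

def days_until_expiry(text, now=None):
    """Whole days remaining before notAfter; negative means already expired."""
    now = now or datetime.now(timezone.utc)
    return (parse_openssl_dates(text)["notAfter"] - now).days
```

Pipe the openssl output into this and alert when the remaining days drop below your renewal window, and separately alert when the fingerprint changes outside that window.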
Another mitigating control is public key pinning, but very few organizations do this any more; it is just too risky operationally. Another method is to limit which CAs you trust, but this is not practical for most organizations. E.g., if you have a special-purpose system that only talks to Specified_Bank, and Specified_Bank only uses Specified_CA, you can strip all other CAs from your trust store and/or manually pin their public key in your system, accepting the risk that connections will break when they update their cert without coordinating with you.
Replacing certificates requires control of either machine, in which case you'd have bigger problems.
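For illustration, a minimal pinning check might look like this. Note that this sketch pins the whole certificate's fingerprint (matching the openssl fingerprints earlier in the thread) rather than the SPKI, which is simpler but guarantees a break on every renewal: exactly the operational risk described above. All names here are hypothetical.

```python
import hashlib

def fingerprint(der_bytes):
    """Colon-separated SHA-256 fingerprint of a DER-encoded certificate."""
    return ":".join(f"{b:02X}" for b in hashlib.sha256(der_bytes).digest())

def check_pin(der_bytes, pinned_fingerprints):
    """Refuse the connection unless the peer cert matches a pinned fingerprint."""
    if fingerprint(der_bytes) not in pinned_fingerprints:
        raise ConnectionError("peer certificate does not match any pinned fingerprint")
```

In practice you would feed it the bytes from `sock.getpeercert(binary_form=True)` on an ssl-wrapped socket, after normal chain validation, so pinning adds to rather than replaces CA checks.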
then there's the question of how secure the encryption itself is, which is a rabbit hole of its own: key size, random number generator quality, correct padding, correct implementation, or even which layer leaks what (e.g. HTTPS still exposes the SNI in cleartext), and it goes on and on...
theoretically there could be flaws in any component, including stolen root certificates, supercomputers, or even quantum computers. it comes down to the price of the safe vs. the value inside.