2. There is currently only one free cert provider. If there are ever issues with it, your users will see a scary error message which will make them think there are security issues with your website.
3. Downloading and running code from a 4th or 5th party and giving it access to your config files is not "more secure".
4. The culture of fear around HTTPS means only the "most secure" or "newest" protocols and cipher suites are to be used. This prevents older clients from working, where plain HTTP works just fine.
5. HTTPS is needlessly complex, making it hard to implement. Several security vulnerabilities have been introduced simply by its use.
6. If you can't comply with the OpenSSL license, implementing it yourself is a hopeless endeavour.
SSL was developed by corporations, for corporations. If you want some security feature to be applicable to the wider Internet, it needs to be community driven and community focused. Logging in to my server over SSH has far more security implications than accessing the website running on it over HTTPS. Yet, somehow, we managed to get SSH out there and accepted by the community without the need for Certificate Authorities.
Genuinely curious - what alternatives do you have in mind? Are there any WoT models that interest you more?
> There is currently only one free cert provider, if there are ever issues with it, your users will see a scary error message
Isn't this the point?
> Downloading and running code from a 4th, or 5th party and giving it access to your config files is not "more secure".
Could you elaborate? Have you written your whole stack from scratch? You are running millions of lines of code that you will never read but have been implemented by other parties.
> HTTPS is needlessly complex making it hard to implement.
Isn't this done with robust battle-tested libraries and built-in support in modern languages?
---
Mainly I'm just wondering why you're letting perfect be the enemy of good. There's always room for improvement in everything, but I don't think user privacy is a reasonable sacrifice to make.
> Giving in ends the hope that it will ever get changed.
Abstaining from HTTPS won't be seen by anyone as a protest, but as incompetency, whether you find that justifiable or not.
We don't have a robust understanding of who exactly operates PKI, but we do know that it's de facto governed by a company on Charleston Road, since CAs only have their root keys listed in things like web browsers at their pleasure. We also know that Charleston Road rewards CAs for their loyalty by red-zoning and down-ranking the folks who don't buy their products. Products which should ideally be deprecated, since SSL with PKI is much less secure.
Can anyone guess who's stymied progress in Internet security, by knuckle-dragging on DNSSEC interoperation? It reminds me of the days of Microsoft refusing to implement W3C standards. Shame on you, folks who work on Charleston Road and don't speak up. You can dominate the Internet all you like, but at least let it be free at its foundation.
> Isn't this the point?
The point is to secure the communication between client and server, and warn/stop it, if it is insecure (MITM et al.). It is counter-productive to stop the communication because an unrelated party (CA) is having issues.
This is important. I have several devices at home that cannot display many web sites because they don't have the ability to use the latest ciphers.
If you don't get the difference in scale between the two, you might have an issue understanding the real problem.
TLS is public key encryption... a 3rd party attesting to the provenance of public keys is inherent to its design.
HTTPS is cargo-cult'ish in this aspect. One obviously should not accept or serve personal data over HTTP, but why encrypt public information? (Having said that, I'm guilty here too, as I blindly followed the instructions given to me by my hosting company and my plain open site redirects to HTTPS.)
If I want to quickly host my page and use encryption, then I have to go through all that hassle to make it work. Perhaps allow the use of self-signed certificates on the same level as http instead of blocking my website.
On the other hand, a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
It's fair enough to argue that a self-signed cert could be an attack, but so could any http request.
> a no-cert (unencrypted) connection can be distinguished from an attack on an encrypted connection: the browser knows a priori (through the protocol in the URL) that the connection is supposed to be unencrypted.
I don't understand how that allows one to distinguish it from an attack. Knowing that a connection is supposed to be unencrypted is just equivalent to knowing that a connection could be under attack.
I manage 100+ servers, hosting a significantly larger number of domains, on a variety of Linux and FreeBSD operating systems, under both Apache and Nginx. "...all that hassle..." to initially set up is under 2 minutes with LetsEncrypt. The renewal (via a cron job) is completely out-of-sight/out-of-mind.
The execution is shockingly simple. If you think it's "all that hassle" I guarantee you haven't even tried.
If you think something is set-it-and-forget-it, you haven't been around long enough.
Only if you're blindly running shell commands the effects of which you don't understand.
So, if you can't trust a certificate (as opposed to it being outright invalid), just show the same level of protection as http.
Because it's taking time to build enough acceptance to flag http as insecure, whereas bad https connections that can't guarantee the expected security properties have been flagged as insecure from the beginning.
At this point, though, modern browsers show http sites as various flavors of "not secure" in the address bar, and limit what those sites can do. Browsers will increase the restrictions on insecure http over time, and hopefully get to the point where insecure http outside the local network gets treated much like bad https.
So like 3-5 minutes of work with Let's Encrypt?
Chrome's eventual goal is to mark all not-secure pages as not-secure: https://www.chromium.org/Home/chromium-security/marking-http...
"HTTP is known to cause cancer in the state of California"
Having the DNS credentials lying around on the server is not a good idea. So creating wildcard certs via letsencrypt is a huge pain in the ass.
If a webmaster has control over somedomain.com I think that is enough to assume he has control over *.somedomain.com. So I think letsencrypt should allow wildcards to the owner of somedomain.com without dabbling with the DNS.
The way things are now, I don't use ssl for my smaller projects at smallproject123.mydomain.com because I don't want the hassle of yet another cronjob and I sometimes don't want the subdomain to go into a public registry (where all certificates go these days).
That's absolutely unnecessary.
Set a NS record for _acme-challenge.domain.tld to your own nameservers, e.g. ns1.myowndomain.tld
And have your own name servers only serve the _acme-challenge.domain.tld zone.
Now you can just use the RFC DNS updater with your ACME client without any need for credentials for the actual domain.tld zone.
I use this currently with my own kuschku.de domain, you can check it out.
    dig +trace @8.8.8.8 _acme-challenge.kuschku.de ANY
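For anyone wanting to replicate this, the delegation amounts to something like the following (hypothetical names and BIND syntax, adapt to your own setup):

```
; in the parent zone for domain.tld, at your regular DNS provider:
_acme-challenge.domain.tld.  IN  NS  ns1.myowndomain.tld.

; on ns1.myowndomain.tld (named.conf), serve only the challenge zone and
; allow RFC 2136 dynamic updates so the ACME client can write TXT records:
zone "_acme-challenge.domain.tld" {
    type master;
    file "/var/named/_acme-challenge.domain.tld.zone";
    allow-update { key "acme-update-key"; };
};
```

The nice part is that the update key can only ever touch challenge TXT records, never your real zone.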
So if you’re using AWS you get it for free. Or you can slap CloudFront or Cloudflare in front of your origin.
I think the barrier is low enough that I SSL all the things (including my small side projects).
Used to be everyone complained about CF putting SSL in front of HTTP origins.
However, CF can also issue a CF-signed certificate with a stupid long expiration for your origins[1] and validate it. This is how I fully SSL many of the things while avoiding potential headaches with LE / ACME. Combine with Authenticated Origin Pulls[2] and firewalling to CF's IP ranges[3] for further security.
Of course, that still leaves CF doing a MITM on all my things.
[1] https://blog.cloudflare.com/cloudflare-ca-encryption-origin/
[2] https://blog.cloudflare.com/protecting-the-origin-with-tls-a...
Static hosts like Netlify & GitHub also enable free SSLs. The barrier is so low most people trip over it.
I am sure there are still very unique edge cases though. If I had one of those edge cases I would sit down & really weigh the pros & cons though of not using HTTPS. I would not take it lightly.
"Free", but you can only use them on AWS stuff. AWS makes it nice and easy (and does a bunch behind the scenes for you). Part of that behind-the-scenes is that they have control of the private key on their side. You want to use the AWS generated cert locally, or on another provider, too bad.
Same here. If you have a domain then you should have a cert, it's not that hard today.
My wife wanted a website that's pictures of our dog as a joke, right now it's a single img tag. The second thing I did after that was getting an HTTPS cert and forcing redirection.
Would that work for multiple domains? So I CNAME the _acme-challenge subdomain for all my domains to _acme-challenge.cheapthrowaway.com?
I used it on a previous post to test it out and it seemed to be fine: https://github.com/benjojo/you-cant-curl-under-pressure/comm...
You can run it yourself locally, or trust (why?) the upstream's service.
I think you still need a steady hostname pointing to it, right?
This is easily testable. I view the website in both Chrome and Firefox, and it's http, not https.
Sure googletagmanager.com is in the preload list, but it doesn't have "mode": "force-https". It just has certificate pinning, not HSTS.
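For context, entries in Chromium's transport_security_state_static.json distinguish the two cases roughly like this (paraphrased from memory, not the exact file contents):

```
// pinning only: certificates are pinned, but http is not upgraded
{ "name": "googletagmanager.com", "include_subdomains": true, "pins": "google" },

// HSTS preload: the browser will only ever talk https to this host
{ "name": "example.org", "include_subdomains": true, "mode": "force-https" }
```

So a domain can be in the file for pinning purposes without being force-upgraded to HTTPS.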
Sure there is Let's Encrypt and if you are facing Internet you are probably good to go.
If you are on an internal network, then good luck. You need to build a PKI, and then put into your devices the right certificate so that it is trusted.
If it was simpler, Apache would sing out its "It works!" in HTTPS and not HTTP.
Next you need to use ACME or Caddy (I use the latter) and tell it to do the Let's Encrypt DNS challenge using DuckDNS. It looks like this for Caddy:
    # in the Caddyfile
    tls {
        dns duckdns
    }

    # in the CaddyEnvFile
    DUCKDNS_TOKEN=your-api-key-goes-here

Then you start it like this:

    nohup caddy -http-port 80 -conf /etc/caddy/Caddyfile -envfile /etc/caddy/CaddyEnvFile -agree -email you@email.com &

That's it, now I can go to https://myRaspberryPi.duckdns.org and I've got HTTPS on my local network without anything exposed on the internet EXCEPT my device's internal IP. You've got to evaluate how much of a threat that is.
Fun fact: TLS doesn't require certificates, and some browsers even used to support HTTPS in these TLS modes many moons ago. See eg https://security.stackexchange.com/questions/23024/can-diffi...
How do you set this up on a domain which is not connected to the Internet? How is the check done?
Whereas the same server could tank 40k rps HTTP requests.
I have a 1 vCPU 2GB server that terminates TLS with dual Prime256v1/curve25519 + RSA 2048 setup with a 10 minute keepalive time, running AES 128, 256 (CPU has AES-NI), and CHACHA20-POLY1305 comfortably handling several millions of requests a day and CPU load hovering 10-20%.
The number of ECC handshakes is surprisingly high, and CHACHA20 works wonders with today's user agents too.
Given the threats from passive attacks today, this is a cost that must be paid. It just looks quite affordable with modern protocols.
Parent suggested that at 172 million requests per day (2000 rps), there would be trouble.
Assuming "several million" is <= 17 million (or even up to 34 million, given the 10-20% range stated), then your stats would tend to agree.
If a 16-bit 200 MHz microprocessor can handle a few thousand connections/second, then a modern processor should definitely be able to stay upright fairly easily.
I am still skeptical TLS handshake on site visit is actually bogging down anyone’s computer.
For the average case it probably doesn't matter, and you can optimize it, but I think it is totally understandable that the average novice could end up with bad https performance if only because the defaults are bad or they made a mistake. If hardware assist for the handshake and/or transfer crypto is shut off (or unavailable, on lower-spec CPUs) your perf is going to tank real hard.
I ended up using ssh configured to use the weakest (fastest) crypto possible, because disabling crypto entirely was no longer an option. I controlled the entire network end to end so no real risk there - but obviously a dangerous tool to provide for insecure links.
Also worth keeping in mind that there are production scenarios now where people are pushing 1gb+ of data to all their servers on every deploy - iirc among others when Facebook compiles their entire site the executable is something like a gigabyte that needs to be pushed to thousands of frontends. If you're doing that over encrypted ssh you're wasting cycles which means wasting power and you're wasting that power on thousands of machines at once. Same would apply if the nodes pull the new executable down over HTTPS.
    openssl speed ecdh
    gatling -V -n -p 80 -u nobody

I know this is somewhat extreme, but on a CPU that was about 30% faster I got 40k rps for small files using the kernel's loopback, which is where the CPU spent most of its time. Feel free to try.
Assume the worst way to attack without being clearly obvious: handshake CPU grinding.
So you are being forced to either not serve http, or to condition users to trust MITM-able redirect. How many people will notice a typoed redirect to an https page with a good certificate?
The solution is simple: browsers should default to https, and fall back to http if unavailable. Sure, some sites have broken https endpoints, but browsers have enforced crazier shit recently.
And going further, you can enable HSTS preloading, meaning the next release of browsers is going to hardcode your website as always and only ever going to be used with HTTPS.
See for example my domain https://hstspreload.org/?domain=kuschku.de, which is currently in the preload lists of all major browsers including Chrome, Firefox, Edge and even Internet Explorer.
I also deploy the same for mail submission with forced STS, and several other protocols.
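For anyone wanting to do the same: getting onto the preload list requires serving an HSTS header with a long max-age plus the includeSubDomains and preload directives over HTTPS. In nginx that's a single line (the max-age value here is just a common choice of two years):

```
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload" always;
```

Then you submit the domain at hstspreload.org, which also checks that your plain-HTTP endpoint redirects to HTTPS.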
Or, as I stated, for preload, you have to either not have HTTP at all, or have a redirect to HTTPS: it should be clear from my above post why I think a redirect is a bad idea. I also dislike turning off HTTP for those that don't have any other option.
To me it seems that browsers just switching to https-by-default and http-as-fallback is a much simpler, better, backwards-compatible change that should just work. What am I missing and why do you feel HSTS is a good idea compared to that?
(Exception being if you use the dns challenge)
Exactly. DNS challenges don't suffer from this issue.
nature.com is marked as Chinese, as are nginx.org and ntp.org.
example.com is Indian in the list as is the now defunct dmoz.org.
I don't understand the methodology behind the country assignments at all…
Must be a bug.
>an expectation that a site responds to an HTTP request over the insecure scheme with either a 301 or 302
Doing things this way is the final nail in the coffin for Internet Explorer 6, since IE6 does not use any version of SSL which is considered secure here in 2019. And, yes, I have seen people in the real world still using ancient Internet Explorer 6 as recently as 2015, and Windows XP as recently as 2017.
Which is why I instead do the http → https redirection with Javascript: I make sure the client isn’t using an ancient version of Internet Explorer, then use Javascript to move them to the https version of my website. This way, anyone using a modern secure browser gets redirected to the https site, while people using ancient IE can still use my site over http.
(No, I do not make any real attempt to have my HTML or CSS be compatible with IE6, except with https://samiam.org/resume/ and I am glad the nonsense about “pixel perfect” and Flash websites is a thing of the past with mobile everywhere)
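The detection part of such a redirect can be sketched roughly like this (my own hedged sketch; the UA matching and version cutoff are assumptions, not the author's actual code):

```javascript
// Old IE announces itself as "MSIE <version>" in the user-agent string
// (IE 11 dropped that token, but it also speaks modern TLS).
function isAncientIE(ua) {
  var m = ua.match(/MSIE (\d+)/);
  return m !== null && parseInt(m[1], 10) < 9;
}

// On the http:// version of a page, send modern browsers to https://.
function maybeRedirect(win) {
  if (win.location.protocol === "http:" && !isAncientIE(win.navigator.userAgent)) {
    // "http://..." -> strip the leading "http:" and prepend "https:"
    win.location.href = "https:" + win.location.href.substring(5);
  }
}
```

Downside to keep in mind: the redirect itself travels over plain HTTP, so a MITM can strip it.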
If you don't do this to get SHA-1 then you're relying on the users somehow having applied enough updates to not need SHA-1 but for some reason insisting on IE6 anyway. That's a narrower set of users. At some point you have to cut your losses.
The task is not as simple as using DNS to store strict-https flags (as DNS can be manipulated by an intermediary), but hardcoding the lists in browsers and keeping them in Chrome's code is definitely not a solution.
e.g. in the past it was just domains and subdomains.
Today there are already some TLDs on the list themselves.
A lot of websites just don't serve over HTTPS, or serve them with domains whose CN or SAN don't match the host.
Many that do support https have links that downgrade you back to http on the same domain.
If nothing else works, temporarily disabling the firewall is a couple clicks away, barely takes any time or effort at all.
I don't know why people are making such a fuss out of this.
The redirects are also hard, I have a static site using Google storage and I have to create a server instance and redirect from there because it's not possible to do an automatic redirect. I don't know why the big cloud hosting providers aren't cooperating to make full https implementation easier.
PKI is technically the best practice for these systems, but it's also the most fragile and complicated. At a certain point, if the security model is so complex that it becomes hard to reason about, it's arguable that it's no longer a secure model, to say nothing of operational reliability.
I also have a whole rant about how some business models and government regulations literally require inspecting TLS certs of critical transport streams, and how the protocols are designed only to prevent this, and all the many problems this presents as a result, but I don't think most people care about those concerns.
Oh, and gentle reminder that there are still 100% effective attacks that allow automated generation of valid certs for domains you don't control. It doesn't happen frequently (that we know of) but it has happened multiple times in the past decade, so just having a secure connection to a website doesn't mean it's actually secure.
Security of the data transfer layer does not mean you can or should trust the website you are visiting.
Just because a website has a padlock does not mean it is trustworthy and you can hand over your CC details.
https://www.amazon.somethiing.other.co/greatDiscount may look great to some!
It's already effectively how password form submissions work in many browsers.
Same with my IoT cameras and all the various local apps I run that can start a web server. Heck, my iPhone has tons of apps that start webservers for uploading data since iPhone's file sync sucks so bad.
We need a solution to HTTPS for devices inside home networks.
I've seen TV adverts from banks for example (Here in the UK) telling people to look for the padlock! This is not a verifiable method of safety.
http: insecure and https: secure
probably only when http ceases to exist can we start differentiating between trustworthy and untrustworthy.
how we actually do that is something we still need to figure out.
for now we have a check against sites that are known to distribute malware. maybe we need to somehow track which sites are known to be trustworthy.
different factors can go into that. their privacy statement, past incidents and their response. etc...
MITM can do anything to your site, so your totally-static site may not be static any more at the victim's end. It may be a site collecting private details, attacking the browser, or using the victim to attack other sites.
Your static HTTP site is a network vulnerability and a blank slate for the attacker.
TL;DR: Secure websites can make the web less accessible for those who rely on metered satellite internet (and I'm sure plenty of other cases).
Providing access to Wikipedia over http to people in third world countries may be worth the risk of someone MITMing the site with propaganda.
The suggestion is only to give some users the option.
That casual dismissal of davidmurdoch's counterargument comes across as tone-deaf to people stuck on crappy connections.
[0]: https://wicg.github.io/webpackage/draft-yasskin-http-origin-...
- If you are hosting a simple static page or blog, your hosting provider probably has Let's Encrypt plugin.
- If you have your own VPS, Caddy has you covered with file serving, fastcgi support for PHP, and proxying to (g)unicorn/nodejs/Go/.NET, and has HTTPS enabled by default.
- If you have more advanced setup (e.g. containers), traefik supports HTTPS with just a few lines of configuration.
- If you are big enough to afford cloud, it takes a few lines of Terraform code to provision certificate for load balancers (speaking for AWS, and assuming others have similar solutions).
For other cases (e.g. lots of traffic with custom haproxy/nginx/etc. setup), you are probably smart enough to find out how to enable Let's Encrypt support.
2) Some services require wildcards, like proxies.
3) Some organizations have, due to someone far away making strange decisions, policies about certificate authorities, and people to audit for compliance. Therefore, a cert costs money and, for a site which is purely informational, that's a hard sell.
4) Because we're not running on a hosting provider, a VPS, containers, or cloud.
5) Because not everyone wants to deal with some combination of the above every three months due to Let's Encrypt's expiration policy.
It's very very nearly maintenance free [1].
[0] There's lots of tooling. My current preference is for https://github.com/lukas2511/dehydrated
[1] If something breaks you have to pay attention, otherwise... Not so much.
- Setup: https://github.com/susam/susam.in/blob/master/Makefile#L30-L...
- Renewal: https://github.com/susam/susam.in/blob/master/etc/crontab#L1
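In practice the renewal side of such a setup boils down to a single crontab entry along these lines (a sketch with an assumed tool and hook, not necessarily what the linked crontab does):

```
# try twice a day; certbot only actually renews certs close to expiry
0 3,15 * * *  certbot renew --quiet --post-hook "systemctl reload nginx"
```

Set it once, and the only maintenance left is occasionally checking the logs to confirm it's still succeeding.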
And "Let's Encrypt" is not an answer to "HTTPS is not free". It's not. We all are going to see our projects outlive Let's Encrypt (or their free tier).
In the end, nothing is secure. A dedicated attacker will find a way, given enough resources. Any security measure is just a deterrent.
My deterrent is that it's not worth MITM'ing my personal website with, like, 10 monthly visitors. (The reader might gasp that I lock my bicycle with a chain that can be snapped in a second, and that a strong enough human can probably bash my home door in).
Anyway. It's almost 2020, and if you are still advocating on moving the entirety of the Web to reliance on Big Centrally Good Guys, I really don't know what else to say to you.
Sure, depending on your setup it's easy, but for a lot of setups it isn't. Instead of trying to say HTTPS is easy and shame everybody who isn't doing it more efforts should be diverted into creating an actual fully encrypted network that doesn't need CAs.
I'm guessing people aren't as lucky as I am to be running on newer machines and such.
I mean it even edits your nginx files to redirect http to https if you agree. It's not hard.
Up to date documentation was near-impossible to find, and the scripts that came out of the box on the recommended client needed some fixing. The whole thing took about half a day, plus some hours a few weeks later once the unforgiving anti-abuse thresholds I accidentally triggered during end-to-end testing finally expired. Definitely wasn't a pleasant experience.
It suddenly becomes really, really complicated if you have multiple servers, multiple domains, nginx configurations that the tool does not expect (but insists on rewriting).
I don't know how it could possibly be any simpler.
My employer won't use Let's Encrypt because they (LE) want unlimited indemnity and that's a deal breaker for them (employer).
What I cannot stand is people who can do it, but refuse to out of laziness. Or because they want their content to be insecure on purpose.
This applies mostly to big orgs, so indie devs can have some leeway if it's too hard to implement.
(Raises guilty hand)
I run a couple of sites on my hosted server that are still http. They both sit behind a varnish setup and to be honest I just have not found the time to get it done. Usually when I mess with my configurations I lose a week to troubleshooting stupid stuff and I just can't bring myself to do it.
I currently use a mini CDN (content delivery network) of three different OpenVZ servers in the cloud to host my content, so getting things to work with Let's Encrypt took about two or three days of writing Bash and Ansible scripts. The scripts get the challenge-response from Let's Encrypt, upload it to all my cloud nodes, have Let's Encrypt verify it got a good response, then upload the new cert to all of the nodes, put it where the web server can see it, and restart the web server.
Point being, the amount of effort needed to get things to work with Let’s Encrypt varies, and can be non-trivial.
Still, the stand-alone mode is pretty dang easy. I've also considered the /.well-known mode but there was some tiny snag.
I made my own security: http://talk.binarytask.com
It's just "single serving server salt" (try saying that fast 3 times) sent to "client for secret hashing" and then "sent back to server again", so it's insecure on registration (just like all security with MITM without common pre-shared secret) but after that it's pretty rock solid, even quantum safe. Requires two request/responses per auth. though.
This tech is nothing new and has been used by many big actors since forever. It's simpler than public/private key encryption because it only requires hashing math to work.
It should be my choice to use whatever encryption I want without having google scare away my customers with "Not Secure".
Lots of US sites on their NO HTTPS list come up in Safari as HTTPS. Rutgers.edu for example.
I host a single site on a host (so, no login, subject name or path information to leak), which only contains details how to connect to my irc server at the same address.
If the message is altered then the most pain anyone will have is connecting somewhere else for the first time. (They won’t be automatically logging in if they’re using this page).
Why does everything need to be TLS? It feels like a cargo cult. A requirement: “because!”
In other scenarios it’s worth modelling threats and I agree that it’s good to err on the side of caution but aside from the modification of my connection information there’s no good tangible reason to incur an overhead in administration.
Although it should be noted; part of the reason that web server even exists is to do letsencrypt for a globally geobalanced irc network.
Traditionally, people have only encrypted things that are deemed sensitive (logins, money, health). However, when the majority of traffic is non-encrypted, actually ciphered data is very noticeable to anyone monitoring the network, and it screams "look at me! I am important!".
However, when >90% of the traffic of the Internet is encrypted, there is no 'extra' information to be gained from that fact. It further forces any surveillance program to expend extra resources, either trying to decrypt everything or choosing to focus only on those people it actually deems important, instead of wholesale surveillance of the entire population.
Further, encrypting content prevents it from being modified, reducing the potential for your traffic to be leveraged against others:
> The Great Cannon of China is an Internet attack tool that is used to launch distributed denial-of-service attacks on websites by performing a man-in-the-middle attack on large amounts of web traffic and injecting code which causes the end-user's web browsers to flood traffic to targeted websites.[1]
That's kinda my argument, not that https is bad. I agree with widespread adoption and taking it as a default even for a static page.
But in my environment I have many dozens of nodes and idk where letsencrypt is going to come in because of geobalanced DNS. I also serve many domains with this project so I don't have the nice DNS-01 ACME verification features because not all DNS providers have an API.
So I have a web server on each node, which reverse proxies .well-known/ to some central server that runs certbot. Then I distribute those certs outwards to those nodes.
It goes against certain sysadmin principles about transportation of private key materials, but it's what works.
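On each edge node, that forwarding is only a few lines of nginx config, roughly (hostname assumed for illustration):

```
# forward ACME http-01 challenges to the central host running certbot
location /.well-known/acme-challenge/ {
    proxy_pass http://certbot-central.internal.example.com;
}
```

Everything else on the node keeps being served locally; only the challenge path ever hits the central box.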
But; given that architecture which caters for a latency sensitive product; letsencrypt is a serious overhead. To the point where I'm considering going back to 2y paid certs.
> If the message is altered then the most pain anyone will have is connecting somewhere else for the first time

If the page is altered so it loads 3rd party tracking code, then the pain is to be tracked. If the page is altered so it opens a "Please enter your ebay login" phishing site in the background, a user might switch tabs, think "Oh, I logged out of ebay somehow" and enter their password into the attacker's site, exposing them to the pain of ecommerce fraud.
If the page is altered to use a 0-day exploit, the pain is to have a zombie machine afterwards.
Etc etc ...
If you are connecting to a "Free Public WiFi" and the malicious actor is the one broadcasting the access-point; it's even easier to MITM you.
Without cert and key pinning your employee laptop can be MITM'd by corporate to eavesdrop on all of your HTTPS traffic. The browser will show that the connection is secure, but it isn't. When you pin the cert and key, even with a compromised corporate computer, the insecure-site warning will show and you'll be alerted to the fuckery.
> Doing things this way is the final nail in the coffin for Internet Explorer 6
- Fucking great! Nothing else to say here.
> handshakes take enormous amounts of CPU
- This is vastly overstated (enormous?). Also, this is called a tradeoff. Security isn't free in time, money, or performance.
> Preloads list is an absolute kludge that does not and will never scale... and works only for specific browser
- The preload list, right now, is 10.6 MB and contains 90,862 entries. This seems to function and scale just fine. Seeding your browser with known values is really the best way to do this until 99.X% of web traffic is served over HTTPS... Also, Chrome, Firefox, Safari, IE/Edge, and Opera make up 98% of all browser traffic today, and they have all supported this standard for years.
> The biggest problem with forcing everything HTTPS is a false sense of security.
- Defense in depth. Layering security controls is the only way to go. Also; this is some crazy mental gymnastics to take the position "wearing a seatbelt is a false sense of security because you can still crash".
> Because it's hard and a pain.
- Feeling that pain is offset onto the attackers trying to compromise your site. If you don't feel the pain; they don't either.
> Secure websites can make the web less accessible for those who rely on metered satellite internet... TLS 1.3 with 1-RTT should improve this situation.
- Even if your entire business depended upon delivering data to metered satellite internet users, the risk of not encrypting your traffic outweighs the cost of encrypting it. WARNING: DON'T IMPLEMENT 0-RTT OR 1-RTT WITHOUT UNDERSTANDING YOUR APPLICATION-SPECIFIC REQUIREMENTS. You can really fuck this up by not properly managing tokens between your webserver and application layer. Not recommended.
> I don't get it. With Lets Encrypt, it's like one or two lines to get everything set up.
- True, but it gets confusing really fast if you don't 100% match the certbot use-case.
> HTTPS is not an obligation.
- For 99% of people running businesses; it is.
> Recently an OpenShift cluster I admin went down because of long-lived certs not being rotated in time.
- If you have had certbot running for a long time I would suggest you check your server logs TODAY and make sure your cron job is still working correctly. Recently there was a change with the certbot acme version requirement and your reissue might be failing. Seriously, take a quick look right now.
> Because frankly, I neither trust letsencrypt nor the certificate authority system in general... but won't help against industrial (e)spionage
- Places tinfoil hat on... you're not wrong.
http://webcache.googleusercontent.com/search?q=cache:t_oVSNu...
https://www.troyhunt.com/heres-why-your-static-website-needs...
The same website to my surprise has an article on why this is faulty reasoning.
If I set up a purely static HTTP-only site in 1998, it would still work with today's browsers, more than 20 years later.
If I set up a purely static HTTPS-only site in 1998, and didn't follow the upgrade treadmill, it would have stopped working for modern browsers some time ago.
Having to set up a "certificate" for that would be an unacceptable burden.
Have you ever posted a link to your site anywhere? Imagine you sent me a post card saying "Come to my beach to look at my cool sandcastle" and then when I got there the sandcastle was actually a robot that stole my credit card.
You could say that it wasn't your fault - somebody broke into your private beach and replaced the sandcastle.
But I would probably still blame you for not securing the area and double-checking the contents before inviting people. Even if I didn't blame you, I probably wouldn't respond to another invitation.
> Horseshit. Users must keep themselves safe. Software can't ever do that for you. Users are on their own to ensure they use a quality web client, on a computer they're reasonably sure is well-maintained, over an internet connection that is not run by people who hate them. None of the packets I send out are unsafe, so my site does not need HTTPS.
> None of those things are my problem. If people don't want to see my site with random trash inserted into it, they can choose not to access it through broken and/or compromised networks. If other website operators are concerned about this sort of thing, they are free to use HTTPS, but I have no reason to do so. Encryption should be available to anyone who wants to serve encrypted content, but I have no interest in using it for my website. It's a shame that people are using web browsers (note: not my website, but BROWSERS) as attack vectors. The legions of browser programmers employed by Mozilla, Google, Apple, and Microsoft should do something about that. It's not my flaw to fix, because it's a problem with the clients. My site does not need HTTPS.
> Earlier you recommended letsencrypt, and now suddenly you want me to pick a competent certificate authority? The only reason they didn't leak my info already is because my site does not need HTTPS.
> Obviously my site does not display ads; as has been pointed out (https://news.ycombinator.com/item?id=14666391), it does not even appear to be monetized. This is because I have a real job and the entire web ad industry can fuck itself off a cliff. So, while mixed-content warnings are pretty obnoxious, my site does not need HTTPS.
That part about the web ad industry is entirely correct.
While I like n-gate's no-bullshit attitude, he seriously needs to check his privileges.