The issue lies in how browsers handle trust for HTTPS. SSH can do encryption without requiring identity verification: it asks "Do you want to trust this new server?" and then, if the key changes, informs you of that. Browsers could easily implement that for .local with self-signed certs.
Of course browser developers assume everyone has internet all the time and you only access servers with signed domains. I’ve wondered what it’d take to get an ITEF/W3C RFQ published for .local self-signed behavior.
(Edit: RFQ, not my autocomplete’s RTF)
For these types of sites we run a local CA, and sign regular certificates for these domains and then distribute the CA certificate to our windows clients through a GPO. When put into the correct store, all our "locally-signed" certificates show as valid.
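As a sketch of the local-CA approach described above (all names and file paths are hypothetical), the CA and a signed server certificate can be produced with plain openssl:

```shell
# Create the local CA: a long-lived self-signed root (hypothetical CN)
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 -nodes \
  -keyout ca.key -out ca.crt -subj "/CN=Example Corp Local CA"

# Create a key and CSR for an internal service (hypothetical name)
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout server.key -out server.csr -subj "/CN=intranet.corp.example"

# Sign the CSR with the CA, adding the SAN modern browsers require
printf "subjectAltName=DNS:intranet.corp.example\n" > san.ext
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 398 -sha256 -out server.crt -extfile san.ext

# Once ca.crt is in the client's trusted root store (e.g. pushed via GPO),
# server.crt validates cleanly
openssl verify -CAfile ca.crt server.crt
```

On Windows the GPO would push ca.crt into the "Trusted Root Certification Authorities" store; server.key/server.crt go to the service itself.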
In other instances, where I haven't been able to do that, like for disparate VPN clients and such, I will generally assign an RFC1918 address to it. For example, service.vpn.ourdomain.com resolves to 10.92.83.200. As long as I can respond to a DNS challenge, I can still get a Let's Encrypt certificate for that domain.
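In the zone file, the setup described above might look something like this (names and token are hypothetical); the A record points at private address space, while the TXT record is what the DNS-01 challenge checks:

```
; public DNS, but the address is RFC1918 and only reachable over the VPN
service.vpn.ourdomain.com.                 300 IN A   10.92.83.200

; temporary record created while answering the Let's Encrypt DNS-01 challenge
_acme-challenge.service.vpn.ourdomain.com. 120 IN TXT "<challenge-token>"
```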
Asking the end user to accept downgraded security is a huge security antipattern.
Also, if I’m operating an evil wifi AP at a coffee shop and I intercept your web request for bankofamerica.com with a redirect to bankofamerica.local, would HSTS prevent the redirect? Or could I then serve you a bad cert and trick you into accepting it?
Also, what sokoloff said makes a lot of sense. Encryption without authentication is worthless, and that cert chain only works in so far as someone at the top vouches for someone’s identity. If that’s your print server, then you are the one vouching for its identity. It makes more sense for you to be the certificate authority and just build your own cert chain.
My browser warns me, I can accept the warning for that particular certificate, and it warns me again if it changes.
It took me a little over an hour one evening to figure out how to create my own CA, trust it, and sign certs for all my local devices (except my UniFi cloud controller, which I admit I gave up on due to time).
The problem is to figure out whether to trust the server you need to get its fingerprint through another channel. Is there an HTTPS equivalent of that?
That's basically how it works though; your OS packages a group of trusted CA certs. You can add additional trusted CA certs, even ones minted by you, to ensure your apps trust the connection.
I'm not sure self-signed HTTPS can do much better than this anyways.
(Yes, yes, it's a crazy idea, hehe)
Sorry for the mostly insubstantial comment but it may help you in the future: it’s RFC (Request For Comments) not RFQ.
And it’s IETF (Internet Engineering Task Force) not ITEF.
You can also get a certificate through the Let's Encrypt DNS challenge without having to expose a server to the Internet, but you'll still need ownership of a domain name and either an internet connection or a local DNS server to support HTTPS using that certificate.
There is always the option of creating a local certificate authority for your devices, but this is kind of a pain. There are some new applications that aim to make this easier [2], but there is no easy way around having to install the root certificate on each device.
[1] https://blog.filippo.io/how-plex-is-doing-https-for-all-its-... [2] https://github.com/smallstep/certificates
For example, you could have serialnumber.manufacturer-homedevices.net, and each device would get a cert for its serial's host name. Ideally, you should properly secure that API with some form of attestation key included on the device. Alternatively, the host name could be e.g. the hash of the device's generated key (that way you could ship the devices without placing individual keys on them, but the host name would change after a factory reset).
Making this actually secure is hard, though, because you need the user to visit the URL for his device. If an attacker can simply get a cert for differentserial.manufacturer-homedevices.net and direct the victim there, you don't win much actual security.
Your device connects out with some kind of persistent connection to their central service then requests to your device go to their server, which does AAA and routes to your local device. Fixes the SSL issue, avoids any NAT headaches, enables fully remote access and most importantly for PMs it makes the device useless without your server-side components. If there is any local accessibility at all, it can be neutered or reduced.
I don't entirely hate this model; it's not my favorite, but it's the way things are going.
Why would I even bother copying and distributing self-signed certificates if I can just properly get a certificate for my own personal router?
It’s idiotic that people still trust pure HTTP and have no option of switching.
If your vendor device or software doesn't support automated certificate rotation, put nginx/haproxy/envoy in front of it.
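A minimal sketch of that fronting pattern (hostnames, paths, and the upstream address are all hypothetical): nginx terminates TLS with an automatically renewed certificate and proxies plain HTTP to the legacy device:

```nginx
server {
    listen 443 ssl;
    server_name device.corp.example;

    # renewed automatically (e.g. by certbot); the device behind never sees these
    ssl_certificate     /etc/letsencrypt/live/device.corp.example/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/device.corp.example/privkey.pem;

    location / {
        # the vendor appliance that can't rotate its own certificate
        proxy_pass http://10.0.0.50:8080;
        proxy_set_header Host $host;
    }
}
```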
The only way I see how this would work is if you not just purchase a domain but also an internet-facing server, and do the renewal and certificate management centrally for all devices - at which point, your device is definitely not standalone anymore.
It would be enough to send it as an intermediate CA cert, no need to install.
Going the self-signed DNS name restricted CA way would likely still not fly with browsers, because there's no way to securely deploy the trust root. (Because if it requires user interaction to install that can be exploited by malicious actors.)
https://chromium.googlesource.com/chromium/src/+/ae4d6809912...
https://cabforum.org/2017/02/24/ballot-185-limiting-lifetime...
https://archive.cabforum.org/pipermail/servercert-wg/2019-Se...
https://ccadb-public.secure.force.com/mozillacommunications/...
> SUB ITEM 3.1: Limit TLS Certificates to 398-day validity Last year there was a CA/Browser Forum ballot to set a 398-day maximum validity for TLS certificates. Mozilla voted in favor, but the ballot failed due to a lack of support from CAs. Since then, Apple announced they plan to require that TLS certificates issued on or after September 1, 2020 must not have a validity period greater than 398 days, treating certificates longer than that as a Root Policy violation as well as technically enforcing that they are not accepted. We would like to take your CA’s current situation into account regarding the earliest date when your CA will be able to implement changes to limit new TLS certificates to a maximum 398-day validity period.
CCADB is a totally different service run by Mozilla and Microsoft (using Salesforce, I presume because they both agree this is terrible but neither can accuse the other of using their preferred pet technologies?) notionally open to other trust stores to track lots of tedious paperwork for the relationship with trusted CAs. Audit documents, huge lists of what was issued by who and to do what, when it expires, blah blah blah. Like a public records office it's simultaneously fascinating and a total snooze fest. Mozilla is using it in this case to conduct their routine survey of CAs to check they understand what they're obliged to do, they're not asleep at the wheel and so on.
This is such an arbitrary decision and such a pain in the ass. Again, a limited number of people used their corporate interests to decide for the whole world with almost no discussion.
The worst part is that the "security" argument for this change is quite weak. Yes, shorter-lived certificates are arguably a little more trustworthy for the user, but that should be the choice of the website that you visit.
Now, you as a user are so stupid that browsers will decide for you which websites are deemed safe for you to visit, the same as with app stores. Compare that to the good old days, like traditional PC software installation, where it was you, the user, who was free to decide which websites you wanted to trust: google.com vs myshaddyfraudyweb.com
This is a clear security win, and thus good for users. And no, I don't trust websites to have my best interests in mind, not remotely. Hell, if browsers hadn't started warning about insecure connections then I suspect that even to this day most websites would still be insecure. We used to leave it up to the choice of each website, and that was a clear failure, and now they're being forced to provide better security, which is a clear win.
Moreover, this didn't come from CA/B anyway, it was rejected there. CA/B agreed the previous 825 day limit, and the 39 month limit before that, but this new rule did not get support at CA/B so Apple imposed it unilaterally (and with some really poor communication but whatever).
Google and Mozilla have just decided that since they wanted this limit, and Apple has effectively imposed it anyway, they might as well go along for the ride.
People can barely tell whether it's really microsoft calling them saying their computer is infected. What makes you think they'll be able to tell the difference between google.com and google-secure-login.com, or whether they should download the "codec pack" that their shady streaming site is offering?
That sounds like a disagreement; it benefits the user, so let the website opt out? Because websites are known to have users' well-being in mind?
I would think the choice on how long to trust a certificate should be on the user, possibly using the hint that the creator of the certificate gave. You wouldn’t trust a certificate from evil-empire.com, no matter its expiration date, would you?
The discussion should be about whether the browser should make that decision on behalf of the user. I’m not sure I’m in favor of that. On the other hand, browsers already do a lot in that domain, for example by their choice of trusted root certificates (and changes to that list)
So in the end, websites would determine their own 'trust value' without the browsers playing police, which leaves room for special cases.
For example, if I make a device that is to be used offline for 3 years, logically the user will not see an issue with a 5-year certificate.
Too bad those times are over and we have this browser cartel enforcing some basic security standards for TLS. Screw them!
https://chromium.googlesource.com/chromium/src/+/ae4d6809912...
// For certificates issued on-or-after the BR effective date of 1 July 2012:
// 60 months.
// For certificates issued on-or-after 1 April 2015: 39 months.
// For certificates issued on-or-after 1 March 2018: 825 days.
// For certificates issued on-or-after 1 September 2020: 398 days.
The source code also requires certificates issued before 1 July 2012 to expire on Jul 1st, 2019 at the latest.

Does that mean that next May, for the first time ever, the domains of all HTTPS sites on the web will be recorded in a public log? I think the only caveat to that is wildcard certificates.
[0] https://www.feistyduck.com/bulletproof-tls-newsletter/issue_...
Although the Chrome mandate only technically kicked in on 30 April, in practice most CAs were considerably ahead of that date. In addition, some of the logs are open to third parties uploading old certificates; Google even operates logs that deliberately accept certain untrustworthy certificates, just because it's interesting to collect them.
If you're excited to know what names exist, the Passive DNS suppliers can give you that information for a price today, their records will tell you about names that aren't associated with any type of certificate, and lots of other potentially valuable Business Intelligence. They aren't cheap though, whereas harvesting all of CT is fairly cheap, you can spin up a few k8s workers that collect it all and store it wherever (this is one of the tasks I did in my last job).
The CABF has talked about doing this before, most recently in SC22 (https://cabforum.org/2019/09/10/ballot-sc22-reduce-certifica...). In that case all browsers supported it, but it wasn't passed by the CA side.
It's possible and free for small players to use letsencrypt, but that still takes some time to set up, manage and maintain over time.
Without automation, you've got an annual chore to do or your site goes offline.
I think some hosts are already starting to offer free and easy SSL certs to their small customers, but I do expect automated SSL management to be generally available for the masses before this takes effect.
Much better to have a separate central cert management system that handles renewals and pushes the certs outwards to the DMZ systems.
How is HTTP harmful when you visit my website about amateur radio? An expired cert is no more harmful than bare HTTP in this non-commercial, non-institutional, personal context. It's the one being discussed in this sub-thread, in case you missed it and assumed the normal HN business context.
The burden is real and completely unnecessary for personal websites. This makes the web more commercial by imposing commercial requirements on everyone.
It's what killed off self-signing as a speed bump against massive surveillance and centralized everyone into the benign dictatorship of letsencrypt. But centralization will lead to problems when money is involved. Just look at dot org.
The real harm comes from this fetishism of commercial/institutional security models.
Even so, this doesn't actually change much. I've never bought a certificate valid for more than a year. I'm not aware of any major player that sells certificates valid for more than a year. So this rule has existed for a long time in practice, but is only now being codified.
It's PKI for Let's Encrypt certificates. Helps you issue, renew, revoke certs from a central place. Also get alerts so you know when things have changed, expired, failed to renew.
While a lot of places give you certs built in, there's a whole world of places you still need certs. Like FTP, mail, behind load balancers, disparate environments and systems, etc.
In the future, I'm planning on creating a way to automate the certificate exchange process. This should help with using and exchanging certs used in client authentication and things like SAML SSO. If expirations get down to a month or less, I see a need for a system to help do all of these things and more.
Small players can easily get certificates manually or automate. The platforms/tools they use often give certificates out of the box (cloudflare, heroku, wordpress, etc...).
Large players can't manage certificates. Developers/sysadmins can't use let's encrypt because it's prohibited by higher up and blocked. Even if they could use it, it's not supported by their older tools and devices. The last large company I worked for had no automation around certificates and the department that handled certificate requests was sabotaging attempts to automate, possibly out of fear of losing their jobs.
I’d say it takes less time than going through a single paid certificate store… Assuming you already have a tool. If you don’t, then maybe it’s the same or 5 minutes more.
1) In my experience the user experience even for technical admins is still flakey on at least some popular platforms. In other words, it's not as incredible as you think.
2) It's not available to a host that doesn't connect to the internet but does occasionally get connected to by a local browser (eg. IoT firewalled inside my LAN is one obvious such case; I'm sure there are others).
And most importantly:
3) You'd have to be insane or naive to accept an architecture that leaves you dependent on a single vendor (especially if you need that vendor more than they need you!).
Unless they set up LE for their customers
(And as much as I like LE, I think it's complicated to depend on one issuer only)
If you want to run a webserver but are unable to set up a cronjob that does
certbot renew
you don't deserve external users. Full stop.

If it's just you and you don't care about your own security, then do whatever you want in your own browser.
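For reference, a minimal crontab entry for the approach above (the schedule and reload hook are illustrative); `certbot renew` is a no-op unless a certificate is close to expiry, so running it frequently is harmless:

```
# twice a day, at an arbitrary minute to spread load on the CA
17 3,15 * * *  certbot renew --quiet --post-hook "systemctl reload nginx"
```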
It’s shit attitudes like this that killed the old internet we all loved
Unless refreshed by active learning, aka someone doing the refresh job.
Or unless delegating the work to large players—either the memory or the hosting.
EDIT: This feels wrong, even when done for right reasons. And I wonder whether this would fly without LE and whether this means we are officially making LE THE critical part of Internet infrastructure.
"Get off my Internet lawn if you can't be up to date" is what we're saying and I just do wonder whether we haven't exchanged too much of accessibility for too little of security.
This approach is more reliable than cron in case of failures/errors. Not only are there fewer moving parts, Caddy's error handling logic and retries are smarter than just "try again in <interval>".
Certificates are encouraged to be of shorter lengths as it reduces their potential for abuse. If compromised, a certificate with a long lifespan could be used for years without anyone noticing. A system which doesn't check for revocation is especially vulnerable (though of course, browsers do).
Let's Encrypt certificates are only valid three months, which works well because it's largely automated. It would be good to extend that philosophy elsewhere: automation, and with shorter cycles.
Note the actual limit is 398 days, which gives a small buffer over 1 year.
It will also increase the number of errors. The more often a thing is done, the more total errors occur doing that thing.
You're right that the absolute number of errors will certainly rise, but the fraction of attempts which have errors will likely fall. As legacy certs expire, the aggregate quality of certs will likely be higher.
A secondary question is whether the gain in security is worth the required effort. Obviously Apple believes this, and LetsEncrypt is pretty easy, so even for hobbyists, it's probably at worst an annoyance.
Even if it's not an automated process (which I think this encourages), then it's easier to keep your skills sharpened by doing something more often.
Would Mozilla have accidentally forgotten to renew their browser certificate recently if it were a more frequent task? It's hard to say, but I think it's likely there'd be a stronger procedure in place. There would need to be.
To be fair, there's already the concept of certificate revocation lists and OCSP (Online Certificate Status Protocol), which help check the validity of a certificate (that is, whether it has been revoked or not).
While short-lived certificates are fine for letsencrypt, pushing the same for the rest of the world looks a bit like an abuse to me.
That is one of the main reasons for LE's short lifespan. Certificate revocation is not reliable in practice.
Even if you use your own PKI, if your certs have a validity > 1 year, won't browsers still complain?
They’ll just assume that because it was untrusted the first time, that cert errors are normal and ignore it. Especially since they will have a “first use” for every new device and every new browser they visit with.
If you only have one domain it isn't an issue, as you can just go get a certificate somewhere else. But if you have 1000+ domains it's an issue.
Most old school CAs do domain validations against the root of the domain, so it's a lot harder to accidentally delegate that.
That's not a reason not to use LetsEncrypt, but it's a reason not to include it in certificate pinning.
If someone can intercept traffic to your server IP, they can get a Let’s Encrypt certificate. If they can’t reliably man in the middle that IP, then HTTP is reasonably secure already.
Such “certificates without certification” are one reason browsers have added new UI elements for certified domains.
Intermediate certificates have shorter lifetimes. Even though they're kept online, they're also stored in HSMs. Even if the CA were compromised, the chance of the private key itself leaking is very small.
End user certificates, on the other hand, are usually handled much more cavalierly. Sure, you could store the key in an HSM, but most servers just keep them in memory (and in the file system). A server certificate's key is far, far more likely to be compromised than a CA key.
I would like to create a self-signed CA with a name-constraint for certain internal (sub)domains, and have my browser trust the CA. And have it sign end-host certificates. And have httpd use those certificates (or certificate chains) such that the end result is a trusted HTTPS connection I don't have to click-through Advanced every time.
Is there a collection of PKI software that makes this remotely easy to do? OpenSSL objectively does not.
I have a good understanding of public and secret key cryptography as well as hash functions and other primitives, but I don't understand any of how PKI works — it's just crazy complicated for what it seems to do.
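For what it's worth, recent openssl (1.1.1+, which added `-addext`) can get surprisingly close in a few commands; a sketch with hypothetical names, where the CA is constrained to the .home.arpa namespace:

```shell
# Self-signed root restricted to one internal namespace via a critical
# name constraint (basicConstraints CA:TRUE comes from openssl's
# default extensions for -x509)
openssl req -x509 -newkey rsa:3072 -nodes -sha256 -days 1825 \
  -keyout nc-ca.key -out nc-ca.crt -subj "/CN=Home Constrained CA" \
  -addext "nameConstraints=critical,permitted;DNS:.home.arpa"

# Leaf key and CSR for one device inside the permitted namespace
openssl req -newkey rsa:2048 -nodes -sha256 \
  -keyout nas.key -out nas.csr -subj "/CN=nas.home.arpa"

# Sign it, adding the SAN browsers check
printf "subjectAltName=DNS:nas.home.arpa\n" > leaf.ext
openssl x509 -req -in nas.csr -CA nc-ca.crt -CAkey nc-ca.key \
  -CAcreateserial -days 398 -sha256 -out nas.crt -extfile leaf.ext

# The chain validates; a leaf outside .home.arpa would be rejected
openssl verify -CAfile nc-ca.crt nas.crt
```

After trusting nc-ca.crt in the browser or OS store, nas.key/nas.crt can go straight into httpd's TLS config. Tools like mkcert or smallstep wrap roughly this flow.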
Pretty easy to use.
Letsencrypt with DNS validator also works great for servers that aren’t accessible externally.
https://sslretail.com/news/ssl-validity-limiting-to-one-year...
Totally speculating here that 30 days is probably the earliest one can renew a yearly cert.
Without this extra margin there'd be an incentive to cut it as fine as possible on renewal (or even not renew until the expiry causes problems) which is bad for security, bad for business continuity and bad for the CA businesses.
The practice of adding unused time to new certificates goes back a long way and probably is a business practice copied from other things you need to renew in this way. After the CA/B Forum came into existence they standardised a limit of 39 months (3 years + 3 months) to support this existing business practice while forbidding new very long lived certificates, this didn't take effect immediately, instead it was allowed to phase in by 2015.
That limit is a bit vague, which wasn't good. Machines don't really do vague, you can see what Chromium does about that in the linked source code - they pick 1188 days as "39 months" on the argument that while 39 months might sometimes be shorter than 1188 days it can't be longer.
In 2018 the CA/B Forum agreed a new limit, 825 days; the specification in days is to avoid vagueness. 825 is two years plus three months plus a very generous allowance for various holidays and other accidents, and I think that getting votes for 825 days was judged better than losing votes for some slightly shorter lifespan like 798 days.
Proposals to further reduce this year or next year fell through and apparently Apple decided that rather than negotiate they'd take the nuclear option, which is always something they could do. With Apple eating the PR cost there's no reason why Chromium shouldn't enforce the same limit.
---
cf. https://support.apple.com/en-us/HT211025:
> This change will affect only TLS server certificates issued from the Root CAs preinstalled with iOS, iPadOS, macOS, watchOS, and tvOS.
> This change will not affect certificates issued from user-added or administrator-added Root CAs.
Fortunately not enforced for currently issued certs.
Will this ever be part of the TLS spec?
I'm pretty sure TLS itself doesn't specify anything about certificate lifetimes. I could be wrong; I have actually read it, but as sibling comment notes, TLS is used in a lot more places than browsers, including mutual TLS between random services that don't use an external CA at all.
Letsencrypt is free and easy to replace (it's automatic, and takes maybe 5 minutes to set up on a new server). EV certificates might be harder, but I've heard good things about certsimple.
EV certs are completely worthless (as in they provide no extra value above that provided by regular DV certs) so nobody should care if they're harder to obtain.
This is security theater, and I think it's intended to make TLS maintenance unbearable for non-IT businesses and to push them to cloud hosting providers like Google Cloud and Cloudflare.
Also latest drafts of TLS ESNI/ECH feature were written by Cloudflare for Cloudflare's needs.
Ideally, something more like 1 hour - like a JWT - would be nice, but that's not particularly practical, as you need to allow some margin for incorrect local clocks.
Aside from not spamming the CT log and possibly making it easier to offload the generation of the OCSP responses to a more efficient architecture than the one needed to issue certificates, I'm not sure how mandatory OCSP stapling is better than just reissuing the certificate every day/week.
How does that work with a largely automated process like Let's Encrypt?