lego --domains 206.189.27.68 --accept-tos --http --disable-cn run --profile shortlived
[1] https://go-acme.github.io/lego/ (seems to be WIP: https://github.com/caddyserver/caddy/issues/7399)
https://github.com/certbot/certbot/pull/10370 showed that a proof of concept is viable with relatively few changes, though it was vibe coded and abandoned (but at least the submitter did so in good faith and collaboratively) :/ Change management and backwards compatibility seem to be the main considerations at the moment.
It allowed me to quickly obtain a couple of IP certificates to test with. I updated my simple TLS certificate checker (https://certcheck.sh) to support checking IP certificates (IPv4 only for now).
A properly configured DoH server (perhaps running Unbound), with a properly constructed configuration profile that included a DoH FQDN with a proper certificate, would not work on iOS.
The reason, it turns out, is that iOS insisted that both the FQDN and the IP have proper certificates.
This is why the configuration profiles from big organizations like dns4eu and nextdns would work properly when, for instance, installed on an iPhone ... but your own personal DoH server (and profile) would not.
- 8 is a lucky number and a power of 2
- 8 lets me refresh weekly and have a fixed day of the week to check whether there was some API 429 timeout
- 6 is the value of every digit in the number of the beast
- I just don't like 6!
There’s your answer.
6 days means that, on a long enough timeframe, the load will end up evenly distributed across the week.
8 days would result in things getting hammered on specific days of the week.
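The rotation effect is easy to check: renewing every 6 days walks through all seven weekdays over time, while a weekly cadence (which 8-day certs encourage, per the "fixed day of the week" point above) pins renewals to one day. A quick sketch (function name is mine, just for illustration):

```python
def weekdays_hit(period_days, renewals=28, start=0):
    """Which weekday indices (0-6) do renewals land on, renewing every period_days?"""
    return sorted({(start + k * period_days) % 7 for k in range(renewals)})

print(weekdays_hit(6))  # [0, 1, 2, 3, 4, 5, 6] - load spreads across the week
print(weekdays_hit(7))  # [0] - always the same weekday
```

Any period coprime with 7 eventually visits every weekday; a multiple of 7 never does.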
People will put */5 in cron, and the result will be the same, because that's an obvious, easy, nice number.
And 160 is the sum of the first 11 primes, as well as the sum of the cubes of the first three primes!
200 would be a nice round number that gets you to 8 1/3 days, so it comes with the benefits of weekly rotation.
The CA/B Forum defines a "short-lived" certificate as 7 days, which has some reduced requirements on revocation that we want. That time, in turn, was chosen based on previous requirements on OCSP responses.
We chose a value that's under the maximum, which we do in general, to make sure we have some wiggle room. https://bugzilla.mozilla.org/show_bug.cgi?id=1715455 is one example of why.
Those are based on a rough idea that responding to any incident (outage, etc.) might take a day or two, so (assuming renewal of the certificate or OCSP response midway through its lifetime) you need at least 2 days for incident response plus another day to re-sign everything, so your lifetime needs to be at least 6 days; the requirement is then rounded up by another day (to allow the wiggle room mentioned above).
Plus, in general, we don't want to align to things like days or weeks or months, or else you can get "resonant frequency" type problems.
We've always struggled with people doing things like renewing on a cronjob at midnight on the 1st monday of the month, which leads to huge traffic surges. I spend more time than I'd like convincing people to update their cronjobs to run at a randomized time.
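A common fix is a randomized delay in front of the renewal command itself. A sketch of an /etc/crontab-style entry (paths, timing, and the certbot invocation are assumptions for illustration, not a recommendation from this thread):

```
# Run daily at 03:00, but sleep a random amount (up to 1h) first,
# so renewals from many hosts don't all hit the CA at the same instant.
# (perl is used because plain sh under cron may lack $RANDOM)
0 3 * * * root perl -e 'sleep int(rand(3600))' && certbot renew --quiet
```

This keeps the cron line readable while decorrelating the actual request times across machines.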
* https://datatracker.ietf.org/doc/html/rfc9799
* https://onionservices.torproject.org/research/appendixes/acm...
I think acme.sh supports it though.
We also support ACME profiles (required for short lived certs) as of v1.18 which is our oldest currently supported[1] version.
We've got some basic docs[2] available. Profiles are set on a per-issuer basis, so it's easy to have two separate ACME issuers, one issuing longer lived certs and one issuing shorter, allowing for a gradual migration to shorter certs.
[1]: https://cert-manager.io/docs/releases/
[2]: https://cert-manager.io/docs/configuration/acme/#acme-certif...
Even getting people to use certificates on IPsec tunnels is a pain. Which reminds me: I think the smallest models of either Palo Alto or Checkpoint still have bizarre authentication failures if the certificate chain is too long, which was always weird to me because the control planes have had way more memory than necessary for well over a decade.
The real key is getting ESP HW offload.
ESP is stateless if using IPv6 (no fragmentation), or even if using IPv4 (fragmented packets -> let the host handle them; PMTUD should mean no need for fragmentation the vast majority of the time). Statelessness makes HW offload easy to implement.
Using the long ago past as FUD here is not useful.
This is no criticism, I like what they do, but how am I supposed to do renewals? If something goes wrong, like the pipeline that triggers certbot failing, I won't have much time to fix it. So I'd be at a two-day renewal with a four-day "debugging" window.
I'm certain there are some who need this, but it's not me. Also the rationale is a bit odd:
> IP address certificates must be short-lived certificates, a decision we made because IP addresses are more transient than domain names, so validating more frequently is important.
Are IP addresses more transient than a domain within a 45-day window? The static IPs you get when you rent a VPS are not transient.
6 days actually seems like a long time for this situation!
They can be as transient as you want. For example, on AWS, you can release an elastic IP any time you want.
So imagine I reserve an elastic IP, then get a 45 day cert for it, then release it immediately. I could repeat this a bunch of times, only renting the IP for a few minutes before releasing it.
I would then have a bunch of 45 day certificates for IP addresses I don't own anymore. Those IP addresses will be assigned to other users, and you could have a cert for someone else's IP.
Of course, there isn't a trivial way to exploit this, but it could still be an issue and defeats the purpose of an IP cert.
These changes are coming from the CA/B Forum, which includes basically every entity that ships a popular web browser and every entity that issues certificates trusted in those browsers.
There are use cases for certificates that exist outside of that umbrella, but they are by definition niche.
In your answer (and excluding those using ACME): is this a good behavior (that should be kept) or a lame behavior (that we should aim to improve)?
Ever-shorter cert lifetimes are a good idea because they are the only effective way to handle a private key leak. A better idea might exist, but nobody has found one yet.
Thing is, NOTHING is stopping anyone from already getting short-lived certs and being 'proactive' about rotating through them. What it is saying is: well, we own the process, so we'll make Chrome not play ball with your site anymore unless you do as we say...
The CA system has cracks, that short lived certs don't fix, so meanwhile we'll make everyone as uncomfortable as possible while we rearrange deck chairs.
awaiting downvotes in earnest.
Though if I may put on my tinfoil hat for a moment, I wonder if current algorithms for certificate signing have been broken by some government agency or hacker group and now they're able to generate valid certificates.
But I guess if that were true, then shorter cert lives wouldn't save you.
If I don't assign an EIP to my EC2 instance and shut it down, I'm nearly guaranteed to get a different IP when I start it again, even if I start it within seconds of shutdown completing.
It'd be quite a challenge to use this behavior maliciously, though. You'd have to get assigned an IP that someone else was using recently, and the person using that IP would need to have also been using TLS with either an IP address certificate or with certificate verification disabled.
I think a pattern like that is reasonable for a 6-day cert:
- renew every 2 days, and have a "4 day debugging window"
- renew every 1 day, and have a "5 day debugging window"
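The arithmetic behind those windows is just lifetime minus renewal interval; a trivial sketch:

```python
def debug_window_days(lifetime, renew_every):
    # Days left to fix a broken pipeline before the last good cert expires
    return lifetime - renew_every

print(debug_window_days(6, 2))  # 4
print(debug_window_days(6, 1))  # 5
```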
Monitoring options: https://letsencrypt.org/docs/monitoring-options/
This makes me wonder if the scripts I published at https://heyoncall.com/blog/barebone-scripts-to-check-ssl-cer... should have the expiry thresholds defined in units of hours, instead of integer days?
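Switching thresholds to hours is a small change; a minimal sketch (the function name is mine, not from the linked scripts):

```python
from datetime import datetime, timezone

def hours_until_expiry(not_after, now=None):
    """Hours remaining before a certificate's notAfter timestamp."""
    now = now or datetime.now(timezone.utc)
    return (not_after - now).total_seconds() / 3600

# With 6-day certs, a "2 days left" alert is better expressed as 48 hours:
exp = datetime(2025, 1, 7, 12, 0, tzinfo=timezone.utc)
now = datetime(2025, 1, 5, 0, 0, tzinfo=timezone.utc)
print(hours_until_expiry(exp, now))  # 60.0
```

Comparing against a threshold like `< 48` hours then behaves the same regardless of what time of day the check runs, unlike integer-day truncation.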
Which should push you to automate the process.
Arguably setting up letsencrypt is "manual setup". What you can do is run a split-horizon DNS setup inside your LAN on an internet-routable tld, and then run a CA for internal devices. That gives all your internal hosts their own hostname.sub.domain.tld name with HTTPS.
Frankly: it's not that much more work, and it's easier than remembering IP addresses anyway.
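As a sketch of the split-horizon half, in dnsmasq syntax (the zone and addresses are made-up examples): inside the LAN these names resolve to private addresses, while outside the LAN they resolve elsewhere or not at all.

```
# dnsmasq.conf: local answers for names under an internet-routable zone
address=/nas.home.example.com/192.168.0.20
address=/pi.home.example.com/192.168.0.14
```

Certificates for those names can then come from the internal CA (or, since the zone is internet-routable, from a public CA via DNS-01).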
> easier than remembering IP addresses
idk, the 192.168.0 part has been around since forever. The rest is just a matter of .12 for my laptop, .13 for the one behind the telly, .14 for the pi, etc.
Every time I try to "run a CA", I start splitting hairs.
There would be no way of determining that I am connecting to my organisation's 10.0.0.1 and not bad-org's 10.0.0.1.
To use it, you need a valid certificate for the connection to the client-facing server, whose hostname does get sent in readable form. For companies like Cloudflare, Azure, and Google, this isn't really an issue, because they can just use the name of their proxies.
For smaller sites, often not hosting more than one or two domains, there is hardly a non-distinct hostname available.
With IP certificates, the outer TLS connection can just use the IP address in its readable SNI field and encrypt the actual hostname for the real connection. You no longer need to be a third party proxying other people's content for ECH to have a useful effect.
Even if it did work, the privacy value of hiding the SNI is pretty minimal for an IP address that hosts only a couple domains, as there are plenty of databases that let you look up an IP address to determine what domain names point there - e.g. https://bgp.tools/prefix/18.220.0.0/14#dns
> In verifying the client-facing server certificate, the client MUST interpret the public name as a DNS-based reference identity [RFC6125]. Clients that incorporate DNS names and IP addresses into the same syntax (e.g. Section 7.4 of [RFC3986] and [WHATWG-IPV4]) MUST reject names that would be interpreted as IPv4 addresses.
Actually the main benefit is no dependency on DNS (both direct and root).
IP is a simple primitive, i.e. "is it routable or not?".
There's also this little thing called DNS over TLS and DNS over HTTPS that you might have heard of? ;)
As a concrete example, I'll probably be able to turn off bootstrap domains for TakingNames[0].
Am I the only person that thinks this is insane? All web security is now at the whims of Google?
mTLS is probably the only sane situation where a private PKI should be used
* An outer SNI name when doing ECH perhaps
* Being able to host secure http/mail/etc without being beholden to a domain registrar
E.g.:
[1] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
[2] https://developers.cloudflare.com/1.1.1.1/encryption/dns-ove...
For local /network/ development, maybe, but you’d probably be doing awkward hairpin natting at your router.
Other than basically being a pain in the ass.
Now you will have an American entity controlling, on a very regular basis, all the assets you want online. There is no way around calling home regularly. It becomes impossible to manage your own local certificate authority as a sub-CA without a nightmarish, constant process of renewal and distribution.
For security this means that everything will be expected to have almost constant external traffic, read-write servers to overwrite the certificates, keys spread around for that...
And maybe I'm missing something, but wouldn't IP address certificates be a nightmare in terms of security?
Like when using a mobile network, or common networks like university networks, it might be very easy to snag certificates for IPs shared by multiple unrelated entities. No?
>Successful specifications will provide some benefit to all the relevant parties because standards do not represent a zero-sum game. However, there are sometimes situations where there is a conflict between the needs of two (or more) parties.
>In these situations, when one of those parties is an "end user" of the Internet -- for example, a person using a web browser, mail client, or another agent that connects to the Internet -- the Internet Architecture Board argues that the IETF should favor their interests over those of other parties.
Incorporated entities are just secondary users.
But what risks are attached with such a short refresh?
Is there someone at the top of the certificate chain who can refuse to give out further certificates within the blink of an eye?
If yes, would this mean that within 6 days all affected certificates would expire, like a very big Denial of Service attack?
And after 6 days everybody goes back to using HTTP?
Maybe someone with more knowledge about certificate chains can explain it to me.
With a 30-day cert renewed 10-15 days in advance, you have breathing room.
Personally I think 3 days is far too short unless you have your automation pulling from two different suppliers.
How many "top of chain" providers is letsencrypt using? Are they a single point of failure in that regard?
I'd imagine that other "top of chain" providers want money for their certificates and that they might have a manual process which is slower than letsencrypt?
If you are supposed to have an establishable identity, I think there is DNSSEC back to the registrar for a name, and (I'm not quite sure what?) back to the AS for the IP.
2) App stores review apps because they want to verify functionality and compliance with rules, not just as a box-checking exercise. A code signing cert provides no assurances in that regard.
App store review isn't what I was talking about; I meant not having to verify your identity with the app store, and being able to use your own signing cert across platforms. Moreover, it would be less costly to develop signed Windows apps; it costs several hundred dollars today.
However, the very act of trying to make this system less impractical is a concession in the war on general-purpose computing. To subsidize its cost would be to voluntarily lose that non-moral line of argument.