The reason CAs are required to use 64-bit serial numbers is to make the content of a certificate hard to guess, which provides better protection against hash collisions. IIRC this policy was introduced when certs were still signed using MD5 hashes. (That, or shortly after MD5 was retired.) Since all publicly-trusted certs use SHA256 today, the actual security impact of this incident is practically nil.
The main practical reason seems to have been that a popular application used by Certificate Authorities, EJBCA, offered an out-of-box configuration that used 63 bits (it called this 8 bytes, but since the ASN.1 encoding it uses is signed, a positive integer that fits in 8 whole bytes can only carry 63 random bits). That looks superficially fine: if you issue two certs this way and they both have 8-byte serial numbers, that just suggests the software randomly happened to pick a zero first bit each time. It's only across a pattern of dozens, hundreds, millions of certificates that it becomes obvious it's only ever really 63 random bits.
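A minimal sketch of why "8 bytes" works out to 63 random bits (illustrative Python; `der_integer_content_length` and `eight_byte_serial` are hypothetical names, not EJBCA's actual code):

```python
import secrets

def der_integer_content_length(n: int) -> int:
    """Content bytes needed to DER-encode n (n >= 0) as an INTEGER.
    DER INTEGERs are two's complement, so a value whose top bit is set
    needs an extra leading 0x00 byte to stay positive."""
    return n.bit_length() // 8 + 1

# A serial using all 64 random bits can need 9 content bytes:
assert der_integer_content_length(0x8000_0000_0000_0000) == 9
# So a generator that insists on exactly-8-byte serials must clear the
# top bit, leaving only 63 random bits (hypothetical sketch):
def eight_byte_serial() -> int:
    return int.from_bytes(secrets.token_bytes(8), "big") & 0x7FFF_FFFF_FFFF_FFFF

assert der_integer_content_length(eight_byte_serial()) <= 8
assert eight_byte_serial() < 2**63
```

Each individual serial looks like a plausible 64-bit draw; only the never-set top bit across many serials gives the game away.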
But yes, I agree the sensible thing here (and several CAs had done it) was to use plenty of bits, and then not worry about it any further. EJBCA's makers say you could always have configured it to do that, but the CAs say their impression was that this was not recommended by EJBCA...
If you could go back in a time machine, probably the right fix is to have this ballot say 63 bits instead of 64. Nobody argues that it wouldn't be enough. But now 64 bits is the rule, so it's not good enough to have 63 bits, it's a Brown M&M problem. If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?
If memory serves, it isn't a theoretical attack either; I read about it being used against (Startcom, maybe?) not so many years ago.
It's "Rage Culture", or maybe just front-page seeking by the author. The problem with that is that it desensitizes people: if everyone is screaming all the time, you just shut your ears. We have real issues to discuss, and this isn't one of them by a long shot.
Reducing the search space from 64 bits to 63 bits is of no consequence, because if an attack on 63 bits were feasible, the same attack would work 50% of the time on 64 bits (or take twice as long to succeed every time). That wouldn't be acceptable at all.
Sure, 64>63, but at the very least it's not "A world of hurt"
Even though the actual security impact is nil, the current policies in place don't allow any flexibility in how non-compliant certs are treated. Therefore, millions of customers now need to replace their certificates due to a mere technicality.
The problem, however, as pointed out down-page [0][1]:
> If you can't obey this silly requirement to use extra bits how can we trust you to do all the other things that we need done correctly? Or internally, if you can't make sure you obey this silly rule, how are you making sure you obey these important rules?
> The reason for the urgent fixes is to promote uniformly applied rules. There are certain predefined rules that CAs need to follow, regardless of whether the individual rules help security or not. The rules say the certs that are badly formed need to be reissued in 5 days.

> If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."
[0] https://news.ycombinator.com/item?id=19377292 [1] https://news.ycombinator.com/item?id=19375758
This specific error isn't a serious issue, as indicated by how little impact it's had on real-world security.
It's not favoritism to Apple and Google if they emit certs with 63 bits and get minor criticism and someone else, say, stops using random numbers to seed cert generation and gets raked over the coals. The latter case would require more urgent and serious attention.
And people are sticking to the letter of the rules entirely independent of the article's author. The author is not advocating for anything to be done, just reporting that this process is already in motion.
Of course we have real issues to discuss. But the fact that all these certs are going to get revoked and require replacing is a real issue that impacts people, even if there's no technical reason for it.
Okay, but that's because 2^63 itself is more than 9 quintillion. Where the search space was previously 18 quintillion, it's now 9 quintillion. Both of those are "big". The attack is 50% easier than "theoretically impossible before certificate expiration", which should still mean it's impossible.
If you discovered your AES key generator only created 127 bit keys, would you correct the mistake moving forward? Or go back and immediately burn everything with the old key? The difference between 2^127 and 2^128 is much, much more than 9 quintillion.
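The magnitudes in the two comments above, sanity-checked in Python:

```python
# Dropping one bit always halves the space, however large it is:
assert 2**64 - 2**63 == 2**63       # the "9 quintillion" difference
assert 2**63 > 9 * 10**18           # 2**63 is ~9.22 quintillion
assert 2**128 - 2**127 == 2**127    # ~1.7e38, dwarfing 9 quintillion
```

In both the 63/64 and the 127/128 case, the missing bit exactly halves the keyspace; what changes is only the absolute size of the half that remains.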
If a 64-bit random serial number already provides an adequate security margin, then no action should be needed for the existing 63-bit certificates. But it seems the choice of 64 bits here is arbitrary, without good justification...
Either you crack it or you don't.
The crux of this entire issue is a company known as Dark Matter, which is essentially a UAE state sponsored company, potentially getting a root CA trusted by Mozilla.
It's highly suspected that Dark Matter is working on behalf of the UAE to get a trusted root certificate in order to spy on encrypted traffic at will. Everyone involved in this decision at least suspects this, if not actively seeking a way to thwart Dark Matter.
Mozilla threw the book at them by raising this technical hurdle about their 63-bit generated serial numbers, which turned out to be an issue that a lot of other (far more reputable) vendors also happened to have.
Should it get fixed? Ya, absolutely.
Is it nearly as big of a deal as giving a company like Dark Matter, who works on behalf of the UAE, the ability to decrypt HTTPS communication? Not even close: that is far scarier, and much more of a security threat to you and me. It's pretty disappointing that this is the story that arstechnica runs with instead of the far more critical one.
The measure of what makes a trustworthy CA is things like organizational competency and technical procedures, requirements that state-level actors easily satisfy. There is no real measure in place for the motives and morals of state-level actors. That should be the terrifying part of this story: anyone arguing about the entropy of 63 vs 64 bits is simply missing the forest for the trees.
This is false. DarkMatter already operates an intermediate CA, so _if_ this were something they were actually planning to do they wouldn't need a trusted root CA to do it. So far, there's been no evidence presented that DarkMatter has abused their intermediate cert in the past, or that they plan to abuse any root cert they might be granted in the future.
Serials were originally intended for... well, for multiple purposes. But if their only function today is as a random nonce, and if they're already 65 bits, they may as well be 128 bits or larger.
A randomly generated 64-bit nonce has a roughly 50% chance of repeating somewhere around 2^32 draws. That can be acceptable, especially if you can rely on other certificate data (e.g. issued and expiry timestamps) changing. But such expectations have a poor track record, and you don't want to rely on them unless your back is against the wall (as in AES-GCM). Because certificates are already so large, absent some dubious backwards-compatibility arguments I'm surprised they didn't just require 128-bit serials.
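The collision odds above follow from the standard birthday approximation p ≈ 1 − exp(−n(n−1)/2N); a small Python sketch (the helper name is mine):

```python
import math

def collision_probability(n_draws: int, space_bits: int) -> float:
    """Birthday-bound approximation: p = 1 - exp(-n(n-1) / 2^(bits+1))."""
    return 1.0 - math.exp(-n_draws * (n_draws - 1) / 2**(space_bits + 1))

# After 2**32 random 64-bit serials, a collision is already likely:
p64 = collision_probability(2**32, 64)    # ~0.39; hits 50% near 1.18 * 2**32
# A 128-bit serial pushes the same risk out to around 2**64 issuances:
p128 = collision_probability(2**32, 128)  # vanishingly small
```

This is why the "just make it 128 bits" argument is about collision resistance across the whole population of certs, not about the guessability of any single serial.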
The attack that we're talking about here isn't breaking a signature, but relies instead on being able to manipulate certificate data to generate a certificate with a known hash. That hash must collide with another certificate hash, which would then let you generate a rogue certificate.
A team demonstrated that this attack was possible: they issued a rogue cert by predicting the not_before and not_after of the certificate that would be issued, predicting the serial of the issued cert, and finding an input for the rest of the cert fields that caused a collision.
https://www.win.tue.nl/hashclash/rogue-ca/
So yes, 128-bit serials would be better, but we should be safe even at 63 bits of entropy.
> it’s easy to think that a difference of 1 single bit would be largely inconsequential when considering numbers this big. In fact, he said, the difference between 2^63 and 2^64 is more than 9 quintillion.
Curious why everyone doesn’t agree to use 64 bits in future and just let the mis-issued certs live out their natural life?
Seems to create a lot of busywork for lots of people for no discernible benefit?
If these rules are not followed and no penalties are applied, then later on when other CAs make more serious mistakes they'll point to this and say "Apple and Google got to disobey the rules, so we should as well, otherwise it's favoritism to Apple and Google."
> 4) This only came up because of DarkMatter, a very shady operator who most people are very happy to have an excuse to screw with technicalities.
Edit: maybe these are sources?
https://bugzilla.mozilla.org/show_bug.cgi?id=1531800
https://groups.google.com/forum/#!msg/mozilla.dev.security.p...
Still not getting the whole picture.
https://www.eff.org/deeplinks/2019/02/cyber-mercenary-groups... covers some background on DarkMatter.
One of the Baseline Requirements is you may not issue certs with fewer than 64 bits of entropy. Turns out DarkMatter was doing that, by issuing certs with 63 bits of entropy. Also turns out this was a thing lots of CAs did. Now that it's been pointed out publicly....
> As demonstrated in https://events.ccc.de/congress/2008/Fahrplan/attachments/125..., hash collisions can allow an attacker to forge a signature on the certificate of their choosing. The birthday paradox means that, in the absence of random bits, the security level of a hash function is half what it should be. Adding random bits to issued certificates mitigates collision attacks and means that an attacker must be capable of a much harder preimage attack. For a long time the Baseline Requirements have encouraged adding random bits to the serial number of a certificate, and it is now common practice. This ballot makes that best practice required, which will make the Web PKI much more robust against all future weaknesses in hash functions. Additionally, it replaces “entropy” with “CSPRNG” to make the requirement clearer and easier to audit, and clarifies that the serial number must be positive.
64 bits, 63 bits, what's the difference? The difference is that we now have to go through everything you might have forgotten that will make a difference. In other words, we apparently can't trust you to follow instructions, and certificates are all about trust.
The disruption caused by reissuing everything surely exceeded the disruption of this theoretical issue. I guess, on the plus side, we get to find out whether the PKI infrastructure is ready for a mass revocation/replacement event...
Recently they stopped releasing new updates for the community edition (stuck at 6.10 while 7.0.1 is out) because they are a really greedy company.
Building it yourself is half a nightmare, and so is the installation process, which relies on Ant tasks that fail 5 times out of 10.
As for the UI, most of the settings are easy to misuse; even their evangelists get fooled by it (especially with their Enterprise Hardware Instance, whose synchronization across nodes is also faulty).
Now if only the same policy would be applied to CAs (possibly a few to mitigate abuse of power concerns, but far less than are in my trust store today).
On a tangent: one practice I'd genuinely like to see for security reasons (and which I'm surprised the CAs haven't proposed themselves, since it would make them twice as much money) is that major sites should always hold valid certs from two CAs. That way, if a CA gets revoked, recovery is just updating a file or even flipping a feature flag, certainly not signing up with a new CA. It would make sense to have the two certs generated by different software, then. (It might also make sense, re abuse-of-power concerns, to present both certs and have browsers verify that a site has two valid certs from two organizationally-unrelated CAs. That way you can be significantly more confident that the certs aren't fraudulent.)
I don't think Digicert/Symantec are using it
[1] https://groups.google.com/forum/#!topic/mozilla.dev.security...
[2] https://groups.google.com/d/msg/mozilla.dev.security.policy/...
[3] https://groups.google.com/d/msg/mozilla.dev.security.policy/...
Doesn't the 'pull the certificates from the browsers' process demand that people from these companies recuse themselves from the conversation?
(this is public trust process stuff, not technology per se)
Many of the affected CAs have already come out and "confessed" that they've issued non-compliant certs and stated that they're revoking them.
No certificates are being "pulled from browsers" as a result of this incident as far as I know.
How true is this?
I'd chalk this up to the author of the relevant module not really grokking the two's complement behavior in java.math.BigInteger.
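For anyone unfamiliar with the pitfall: `new BigInteger(byte[])` in Java interprets the array as signed two's complement. Here's a Python sketch of the same behavior using `int.from_bytes(..., signed=True)` (an illustration of the class of bug, not the module's actual code):

```python
# Like java.math.BigInteger(byte[]), interpret bytes as SIGNED two's
# complement. (signed_value is a hypothetical helper for illustration.)
def signed_value(raw: bytes) -> int:
    return int.from_bytes(raw, "big", signed=True)

# Eight random bytes come out negative about half the time:
assert signed_value(bytes([0xFF] * 8)) == -1
# A naive "fix" that takes the absolute value folds pairs of inputs onto
# one output, so only 63 of the original 64 bits survive:
assert abs(signed_value((-5).to_bytes(8, "big", signed=True))) == \
       abs(signed_value((5).to_bytes(8, "big", signed=True)))
```

Whether the sign is "fixed" by taking the absolute value or by clearing the top bit, the result is the same: a 64-bit draw collapses to 63 effective bits.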
Imagine a collision attack that takes about a year with 64-bit serial numbers; with 63-bit serial numbers it should take about half that, 6 months.
The average certificate is issued for about 1 year, so an attack that used to take a year now fitting into 6 months can make the difference between generally-not-useful and very practical and dangerous.