I also know that Cloudflare is now trying to adopt these PQC protocols[0][1], so I checked the Cloudflare blog after seeing this attack. Then I found a blog post mentioning this attack[2], lol.
[0] https://blog.cloudflare.com/making-protocols-post-quantum/
[1] https://blog.cloudflare.com/post-quantum-key-encapsulation/
The signature schemes seem dodgy to me except for SPHINCS+ and its variants, and those have big keys and signatures. The keys are not impractically big for many uses, but would be tough for things like blockchains.
Paper Abstract:
This work introduces new key recovery attacks against the Rainbow signature scheme, which is one of the three finalist signature schemes still in the NIST Post-Quantum Cryptography standardization project. The new attacks outperform previously known attacks for all the parameter sets submitted to NIST and make a key-recovery practical for the SL 1 parameters. Concretely, given a Rainbow public key for the SL 1 parameters of the second-round submission, our attack returns the corresponding secret key after on average 53 hours (one weekend) of computation time on a standard laptop.
Quantum computers have special properties that make them capable of breaking commonly used encryption schemes. We're dependent on those schemes for secure communication, like logging into a bank website. Because of this, the organization NIST has been working on finding encryption schemes that would be difficult to break with a quantum computer.
NIST was at the stage of this project where, after reviewing many candidates, they felt reasonably confident that three specific signature schemes met the requirements. Rainbow made it to the top three. Ward Beullens essentially managed to break Rainbow, thereby making it ineligible for use in a quantum computer-powered future.
I assume this puts the two remaining candidates' eligibility into question. Were the requirements lacking, or is the project inherently at risk of failure due to the nature of quantum computing?
Generally it seems the encryption side of post quantum is a bit easier than the signature side. All signature schemes proposed have significant downsides.
(though this result raises very serious questions about the whole process: if a promising candidate that probably would've been standardized very soon can be broken this severely, it calls into question whether we know enough about these technologies to standardize them yet)
The reason they do well in this area is that you can implement a Fourier transform with exponentially fewer quantum logic gates than classical logic gates.
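For a rough sense of that gap, here's a back-of-the-envelope sketch. The gate-count formulas are the standard textbook ones: a classical FFT on N = 2^n points costs on the order of N log N operations, while the standard quantum Fourier transform circuit on n qubits uses n Hadamards plus n(n-1)/2 controlled-phase gates, i.e. O(n^2). The function names are my own, just for illustration:

```python
def classical_fft_ops(n_qubits: int) -> int:
    """Rough operation count for a classical FFT on N = 2**n points: O(N log N)."""
    N = 2 ** n_qubits
    return N * n_qubits

def qft_gate_count(n_qubits: int) -> int:
    """Gate count for the textbook QFT circuit on n qubits:
    n Hadamards plus n*(n-1)/2 controlled-phase rotations -> O(n^2)."""
    return n_qubits + n_qubits * (n_qubits - 1) // 2

# The classical count grows exponentially in the number of qubits,
# the quantum count only quadratically:
for n in (10, 20, 30):
    print(n, classical_fft_ops(n), qft_gate_count(n))
```

At 30 qubits the classical transform is already billions of operations while the QFT circuit is a few hundred gates, which is the exponential advantage Shor's algorithm exploits.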
Post-quantum cryptography involves implementing a cryptosystem which cannot be reduced to a hidden subgroup problem, but I'm still not sure if this is sufficient (QIP might solve other classes of problems easily).
You can already tell it's not that, because the article was not about breaking the encryption with a quantum computer.
It was broken with trivial hardware and in trivial time.
They've increased the parameter sets to compensate, implying that the maths I don't understand doesn't represent a systemic failure but rather a sufficiently meaningful reduction in attack complexity.
[0] https://groups.google.com/a/list.nist.gov/g/pqc-forum/c/KFgw...
You can't expect a government department to provide robust security to the masses when the rest of the government is trying to prevent that exact situation.
At this point, anything cryptography-related coming from NIST should be considered compromised.
But don't over-correct. You can't just call anyone who submits to NIST an NSA puppet. There is zero parallel with the Dual EC backdoor.
Cryptography is hard - there have been numerous NTRU optimizations that withstood years of analysis before someone worked out how to break them. Not everything is an NSA conspiracy. The Dual EC bullshit was confusing even to other cryptographers at the time, but back then good faith was still being assumed.
The attack the NSA used on the standardization process can't be repeated in any way now, because no protocols are accepted that don't demonstrate how the various constants are determined. Of course there's also much less trust in the US gov now, and more importantly in US gov-adjacent cryptographers.
It can be either, quite frankly.
There are also systems like learning with errors, shortest vector, ... but I don't understand them well enough to know if they've been proven safe at a basic-technique level.
The problem is that there have been many attempts to reduce the actual key size, and they keep being found to have broken the security of the underlying scheme. I feel like that's what has happened here with Rainbow.
(as a note to the "NSA conspiracy" folk: The NSA or what have you wants schemes that they can break by knowing some secret value. Schemes that simply break outright aren't useful to them because it means (1) anyone can break it, and (2) as a byproduct of (1) they cannot use it safely. In an ideal world what they want is something so secure that they could use it for communication themselves - which would reduce suspicion - but also be able to decrypt everything)