If scalable quantum computers can be built (which seems probable, given how fast we are progressing in the number of qubits we can keep coherent together), then certain restricted types of public key encryption will be broken by quantum hardware (in fact, all currently deployed public key encryption). We know other public key algorithms exist that run on classical computers and still are not broken by quantum computers. In many ways they are less tested and less practical, so for a while NIST has been sponsoring the development of such quantum-resistant, classical-computer public-key algorithms. The usual name for this is post-quantum cryptography.
Are we? Every so often I check back to see how well they're doing at actually running Shor's algorithm. In 2012 they managed to find that 21 = 3 x 7. And today... still 21 = 3 x 7. That doesn't sound like fast progress.
Quantum annealing is doing well for itself, but it isn't any sort of threat to encryption.
I would not take any experiment performing Shor's algorithm today or in the past particularly seriously, since it would probably be something fine-tuned for the particular instance of the problem (e.g. 21). We do not have anything that can be called an "error-corrected logical qubit", and we need that before we can make serious claims about running algorithms like Shor's.
There are a number of problems that are well established, with strong security proofs against both classical and quantum algorithms; the problem is that the key and message sizes are currently fairly terrible in practice.
On the plus side, all the current symmetric ciphers are still pretty solid against quantum machines (Grover's search is a square-root improvement, so it would simply require a doubling in key size, though I suspect in practice even that isn't necessary).
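The square-root argument is just arithmetic: Grover's search over 2^n keys takes about 2^(n/2) quantum queries, so the effective security level halves. A minimal sketch of that accounting (the function name is mine, not a standard API):

```python
def effective_bits(key_bits: int) -> int:
    """Grover's search gives a quadratic speedup over brute force,
    so searching 2**n keys takes about sqrt(2**n) = 2**(n/2) queries."""
    return key_bits // 2

# AES-128 drops to ~64-bit security against a Grover attacker;
# AES-256 keeps a comfortable ~128-bit margin.
for bits in (128, 192, 256):
    print(f"AES-{bits}: ~{effective_bits(bits)}-bit post-quantum security")
```

This is why "double the key size" (128 -> 256) restores the pre-quantum security margin against Grover-style attacks.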
× There is still no hardware implementation of a single logic gate, despite what, 40 years of research?
× The stability of qubits is pathetic.
× There is (almost) no software stack.
× The number of qubits is ridiculous, and 40 years of research can't even produce a consensus on how many qubits are needed to beat a CPU on at least one task. Even if quantum computing could reduce the complexity of some algorithms (I strongly doubt that; quantum speedup has never been shown empirically), the constant factor cannot be handled with 50 qubits, more like 1,000,000 qubits at minimum.
× There is no sign of the needed breakthrough in error-correcting codes.
I consider quantum speedup to be theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds", which necessitates a misunderstanding of causality to make sense.
I consider quantum speedup to be flawed in practice too, as orders of magnitude upon orders of magnitude of progress are needed, and the trend is: no progress or very slow progress.
Thus quantum computing is like vaporware, which takes funds that would be far better allocated to e.g. medicine, AGI, or ASICs.
> theoretically flawed, based on an interpretation of quantum mechanics needing "parallel worlds"
is patently false. There is nothing in QC that would stop working under the good ole' Copenhagen interpretation.
In fact, if there was even a theoretical possibility that a QC experiment would work only in some QM interpretations, they would no longer be interpretations, but falsifiable hypotheses.
Basically, the claim is that quantum interactions are more fundamental to the universe than coarser ones like stochastic randomness or simple presence/absence.
If a computing device (in a universe) is built to compute using a more fundamental substance (of that universe), it stands to reason that it would be more efficient.
If you model a photon, does your model ever move as fast as the photon?
Even if you don’t go that extreme, there’s no indication that e.g. AES is breakable.
Though the concept seems nice, it has quite a few issues in practice, such as the distance limits between coupled devices and the need for a dedicated physical communication line.
Oh, and before you think you can just chain several devices to get longer distances: that opens you up to classical MitM attacks.
My understanding is that it remains unencrypted forever. That’s why it is a public key. As long as the coins are not moved to another account that public key stays a valuable target.
Edit: As DennisP pointed out, my understanding was wrong, and indeed only the hash of the target is published until you make a transaction from an account.
Early Bitcoin transactions did not use addresses.
Example from block 1:
https://www.blockchain.com/btc/tx/0e3e2357e806b6cdb1f70b54c3....
You'll see at the bottom that the opcode is a PUSH / CHECKSIG rather than the later DUP / HASH160 / PUSH / EQUALVERIFY / CHECKSIG format.
So this isn't true. (blockchain.info derives an address but actually the pubkeys are right there in plain sight. Have at it!)
Most transactions are indeed made to pubkey hashes though, yes.
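The P2PK vs. P2PKH distinction above can be sketched in a few lines. This is a toy illustration, not real Bitcoin serialization: plain SHA-256 (truncated to 20 bytes) stands in for Bitcoin's actual HASH160 (SHA-256 followed by RIPEMD-160) to keep it stdlib-only, and the pubkey is a placeholder.

```python
import hashlib

def hash160_stub(pubkey: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160 (SHA-256 then RIPEMD-160);
    # truncated plain SHA-256 keeps this example stdlib-only.
    return hashlib.sha256(pubkey).digest()[:20]

pubkey = b"\x04" + b"\x11" * 64  # placeholder uncompressed pubkey

# Pay-to-pubkey (P2PK), as in early coinbase outputs like block 1:
# the full public key sits in the output script from day one.
p2pk_script = ["PUSH <pubkey>", "OP_CHECKSIG"]

# Pay-to-pubkey-hash (P2PKH): only the 20-byte hash is on-chain,
# and the pubkey itself is revealed only when the owner spends.
p2pkh_script = ["OP_DUP", "OP_HASH160",
                f"PUSH <{hash160_stub(pubkey).hex()}>",
                "OP_EQUALVERIFY", "OP_CHECKSIG"]

print(p2pk_script)
print(p2pkh_script)
```

That spend-time reveal is why unspent P2PKH outputs have a window of quantum resistance that P2PK outputs never had.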
Depending on how easy it is, all those addresses that were abandoned containing 50 BTC are also up for grabs.
Would a coin without public addresses be better suited against such future?
The only solution would be to run a Bitcoin node on the hardware wallet itself, which is not possible right now.
So you have to treat the public key as a public key.
For Bitcoin this is much harder.
The community will just do a blockchain snapshot of balances at an agreed-upon block and start a new distributed ledger with quantum-resistant encryption.
Snapshots have been done hundreds of times.
Cringe. Quantum computers are not "able to process information at speeds exponentially faster than today's supercomputers". They're able to solve a very specific subset of problems that are hard for classical computers. A very specific subset, called BQP, whose known speedups mostly have to do with things like finding prime factors. NP-hard problems are probably uncrackable with quantum computers.
My answer to the above question is no. Suppose a quantum computer has 10 states per unit instead of 2. Its running time for exponential algorithms is still exponential; only the base changes. A simple example: 2^10 is about 10^3. Still exponential.
Should we be recording encrypted streams and saving them for a few years until we can break them? Is there any value in that?
The public blockchains make all this more feasible, since the community keeps the data around in its original state for them!
So much cheaper than recording SSL/TLS traffic, for example.
Also that is why they would find it important to exfiltrate the data from a company (or government) before it can be re-encrypted with something better.
Suppose our adversaries have a machine which can do Shor's algorithm at the scale needed to break modern public keys for say $1M and an hour, and they have been recording encrypted sessions.
For sessions encrypted using RSA key exchange, it is enough for them to spend $1M and wait one hour, and then they can decrypt everything they've recorded under that one key. A typical HTTPS site keeps one key for months or even years, so if they've recorded the encrypted data, they can read all of it for $1M.
Where forward secrecy (e.g. ECDHE) was used, the cost is $1M (and an hour) per session, because the keys change each time, so every fresh session needs its own run of the expensive algorithm.
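A back-of-envelope sketch of that cost difference, under the (assumed, illustrative) $1M-per-Shor-run figure from above:

```python
SHOR_RUN_COST = 1_000_000  # assumed cost per Shor run (USD), per the thread

def decrypt_cost(sessions: int, forward_secrecy: bool) -> int:
    """With static RSA key exchange, one broken key unlocks every
    recorded session; with ephemeral (ECDHE) keys, each recorded
    session needs its own Shor run against its own key."""
    return SHOR_RUN_COST * (sessions if forward_secrecy else 1)

print(decrypt_cost(10_000, forward_secrecy=False))  # 1000000
print(decrypt_cost(10_000, forward_secrecy=True))   # 10000000000
```

So forward secrecy doesn't stop the attack, but it multiplies the attacker's bill by the number of sessions, which is often enough to make bulk decryption uneconomical.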
They take a crypto conference happening at a prestigious institution and sprinkle in some bitcoin punditry to make them seem related.
Is this true?? I've never heard this claimed before.
One of the problems with current PKI is weakness in the face of quantum computers, leading to a new crop of algorithms being submitted to NIST, etc.
I wanted to ask whether the following simple scheme, based just on cryptographic hashes, can be used CONFIDENTLY, SECURELY and RELIABLY in many situations where asymmetric-key cryptography is used today, and in many others too, such as providing provably random polling, etc. It is very similar to a One Time Pad but uses a cryptographic hash function to generate the OTP codes.
Here is the scheme: everyone generates a random private key K[p] and stores it just like any asymmetric private key (encrypted with some non-stored derived key).
They use any cryptographic hash function that hasn't had a serious preimage attack (perhaps even MD5?), hash it n times (e.g. 10,000,000) to get h[p][n], and publicly commit to that number. This is like a public key.
The hashes are long enough that it’s infeasible to reverse them. Key strengthening can be achieved by jumping a few onion layers between transactions.
If you start running out then you post a new public key, signed with one of your remaining onion layer codes. Any verifiers store the original public key per participant, and then can replace them with the new public key if it was properly signed by the old one, etc.
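The core commit/reveal mechanism described above (essentially a Lamport-style hash chain, as in S/KEY one-time passwords) can be sketched in a few lines. Function names and the small n are mine; the post suggests n around 10,000,000:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def make_chain(seed: bytes, n: int) -> list[bytes]:
    """chain[0] is the private seed K[p]; chain[n] is the public commitment h[p][n]."""
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))
    return chain

def verify(revealed: bytes, layers: int, commitment: bytes) -> bool:
    """Hash the revealed value `layers` more times; it must land on the committed tip."""
    x = revealed
    for _ in range(layers):
        x = h(x)
    return x == commitment

n = 1000  # small for the demo
chain = make_chain(b"random private key K[p]", n)
public_key = chain[n]

# Reveal one onion layer per transaction, peeling inward toward the seed:
assert verify(chain[n - 1], 1, public_key)
assert verify(chain[n - 3], 3, public_key)
```

The one-way property does the work: anyone can hash forward to check a reveal against the commitment, but nobody can hash backward to precompute the next layer.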
Use case: generating provably random numbers by mutually distrusting parties
Participants gradually reveal their hashes, one onion layer per transaction. Each provably random seed is a function of the alphabetically smallest/largest three of those hashes at the next onion layer. If not all of them reveal their hashes in time, they gossip, verify and agree on which ones are the smallest/largest three before some cutoff point, like "most reported that most reported". That leaves tons of bits of entropy coming from everyone!
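A toy version of that commit-then-reveal randomness, simplified to a single round (party names are placeholders, and I take the lexicographically smallest three reveals as the post suggests):

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Each mutually distrusting party reveals its next onion-layer hash,
# which it committed to (via further hashing) before the round began.
reveals = [h(s.encode()) for s in ("alice", "bob", "carol", "dave", "eve")]

# Sort the revealed hashes and feed the smallest three into the shared
# seed, so no single party controls the output on its own.
smallest_three = sorted(reveals)[:3]
seed = h(b"".join(smallest_three))
print(seed.hex())
```

One caveat worth noting: a party that sees others' reveals first can still bias the result by withholding its own, which is why the post's gossip-and-cutoff rule (or an extra commit round) is needed in practice.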
Use case: Authenticator Apps
The hash h[p][n+1] would be a hash of some substring of h[p][n] with enough bits that finding all chosen preimages (by an eavesdropper of the previous code) would be infeasible in advance. Perhaps 10 alphanumeric characters is enough. Also, when displaying the code to enter, the authenticator app can tell the user a number from 1-100 indicating to the verifier how many onion layers to peel, making it harder to precompute the preimages. Or the user would have to enter the entire hash via the network-connected computer scanning a QR code, NFC or something. From a security standpoint, this method seems superior to the HOTP and TOTP schemes used in authenticator apps today, since there is no need to trust the verifier with any secret keys (https://www.ietf.org/rfc/rfc4226.txt). Also there is no need to synchronize clocks, since the client simply tells the server how many times to run the hash, and increments that number every time.
Use case: Signing Payloads
Participants reveal a payload and commit to an HMAC signature using the cryptographic key at the next onion level, which at that point would be known only to them. All these signatures are collected into a blockchain block / Merkle tree timestamp / similar thing, and it is sent to the participant before they reveal the onion key they used to sign it.
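This sign-then-reveal pattern (commit an HMAC under a still-secret chain key, timestamp it, then disclose the key) is close in spirit to the TESLA broadcast-authentication idea. A minimal sketch, with all names and values mine:

```python
import hashlib
import hmac

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

# Onion layers: key_next is secret for now; key_pub = h(key_next)
# is the layer everyone already knows from earlier reveals.
key_next = b"still-secret onion layer"
key_pub = h(key_next)

payload = b"transfer 5 coins to bob"
tag = hmac.new(key_next, payload, hashlib.sha256).digest()
# Step 1: publish (payload, tag) and get it timestamped (block / Merkle tree).
# Step 2: only after the timestamp is fixed, reveal key_next.

# A verifier first checks the revealed key sits on the chain, then the tag:
assert h(key_next) == key_pub
expected = hmac.new(key_next, payload, hashlib.sha256).digest()
assert hmac.compare_digest(tag, expected)
```

The timestamp ordering is the whole trick: the tag is only meaningful as a signature if it provably existed before the key became public.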
Use case: Off the Record Messaging
The blockchain or Merkle tree is private between a few parties only, so once the next onion level is revealed, no participant can prove the payload was generated by a given participant: since all the onion hashes are known, any of them could generate a new valid tree with any payload history. They can only prove it to each other; or, given enough "witnesses" attesting to that tree, people might trust them on the basis of consensus among (presumably) mutually distrusting parties, but that's not the same thing as cryptographic proof. But that is true of any OTR conversation.
Use case: Restoring Access
This can be used instead of Shamir's Secret Sharing. The server would have to store keys for every participant, and M of N participants would just sign that they approve authorization of some new session, some new key, or whatever. These signatures could easily be checked by anyone who has the public keys of the M participants who signed.
Use case: Decrypting payloads
Not sure how one would do this one, to be honest. With PKI, someone could encrypt a payload that can only be decrypted by a private key holder. I see how to do signatures and HMAC, but not the other way.
They can siphon off a lot before other people catch on to the complaints and realize that best practices were being followed.
The No. 1 challenge is how to stop centralized exchanges from extensive market manipulation.
Decentralized cryptocurrencies are never going to achieve their goals while players like Bitfinex and Tether are in a position to "print" unlimited amounts of money and use them to purchase coins like BTC.