- Deriving a key for all devices from a single master key creates a single, catastrophic failure mode: all devices become vulnerable together. As soon as customers figure this out, nobody serious will adopt it, because they can't afford to accept that known risk exposure.
- We're assuming that the HSM we're using doesn't have a bias in its key-generation RNG that limits the real key space, because if I were an intel agency, that's probably the first lever I would pull.
- The entropy of the additional derivation components we can source from the individual device to diversify keys locally is really limited, and some really smart people are going to be reversing our code. Apple (unrelated: in my own work, I never worked for anyone affiliated with them) relied on limiting the number of attempts in hardware (effectively) to mitigate this risk.
Personally, I think the Ozzie proposal is a red herring meant to give the feds rhetorical leverage by providing their side with something few people understand but can get behind politically, because it's sufficiently complex to be "our" magic vs. "their" magic. The aim is to drown out technical objections and make the problem a political one, where they can use their leverage.
As the author (Green) notes, we can design some pretty crazy things, and if the feds came out and said, "build us a ubiquitous surveillance apparatus, or at least give us complete sovereign and executive control of all electronic information," that is a technically solvable problem, but in the US it is legally intractable. So instead, they want those effective powers without the overt mandate.
We can't even trust manufacturers to provide updates in most cases. Placing that much trust in them is nothing short of lunacy.
One might as well propose having manufacturers build in the government's public key (and auto-brick the phone on its use), so that the phone can detect whether it is really the government reading it.
Another note:
"Ozzie’s proposal relies fundamentally on the ability of manufacturers to secure massive amounts of extremely valuable key material against the strongest and most resourceful attackers on the planet. "
This is not true: the phone encrypts the user's passcode against the manufacturer's public key. If the government tries to read the phone, it will get the encrypted passcode (useless on its own) and send it to the manufacturer, who decrypts the passcode. A single private key is not massive amounts of information. Not that it changes anything about protection needs: whether it's a piece of paper containing, say, 4096 bits (512 bytes), or, in Matthew Green's misinterpretation, billions of 512-byte entries (half a terabyte) on a single HDD, they both have the same value. The whole code base needs similar protection anyway: their bootloaders are already signed by the manufacturer.
All this centralization is bad, leave the crypto genie out of the bottle please...
If we make 2 billion phones a year (Apple itself is just over 200M) and you have a line printer running full blast (66 lines = 1 page per second), you could do Apple with one printer... and the world with 10. It would be a lot of boxes of paper, though... about a box an hour.
edit: to be clear, I was assuming that almost every dot in the matrix was a valid bit and that there were 66 keys per page... 80 or even 132 columns at 7x5 wouldn't be enough for 4096 bits otherwise.
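A quick back-of-the-envelope in Python (one key per line at 66 lines/sec, and a 5,000-sheet box, are my assumptions following the parent's numbers):

```python
# Back-of-the-envelope for printing 4096-bit keys on a line printer
# running at 66 lines (= 1 page) per second, one key per line.

keys_per_second = 66           # 66 lines/sec, one key per line
apple_per_year = 200e6         # ~200M iPhones/year
world_per_year = 2e9           # ~2B phones/year
sheets_per_box = 5_000         # assumed: a standard 10-ream box

apple_days = apple_per_year / keys_per_second / 86_400
world_days = world_per_year / keys_per_second / 86_400
boxes_per_hour = 3_600 / sheets_per_box   # one page per second

print(f"Apple's annual keys: ~{apple_days:.0f} printer-days")   # ~35
print(f"World's annual keys: ~{world_days:.0f} printer-days")   # ~351
print(f"Paper: ~{boxes_per_hour:.2f} boxes/hour")               # ~0.72
```

So one printer covers Apple in about a month, a single printer running all year barely covers the world, and ten give comfortable margin; and the paper burn is indeed roughly a box an hour.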
But as I wrote, it's not necessary in Ozzie's scheme: Apple only needs to store the single private key. All the phones contain the same public key corresponding to it. All phones encrypt the user's passcode to the same public key. When a user tries to unlock his own phone with his correct passcode, the phone encrypts the passcode and arrives at the same encrypted value, unlocking the phone. When the government seizes a phone, a special device makes the phone show the encrypted passcode, dump the gigabytes of encrypted phone contents, and burn an irreversible efuse in the processor, disabling it. They send the encrypted passcode to Apple, who verifies it is indeed the government. Apple uses its single private key to decrypt the user's passcode. Apple sends this passcode to the government. The government can decrypt the image.
In the proposal there is no need for a massive database of key material. It's nonsense.
(in practice, Apple would use threshold cryptography, so that at least k out of n private keys, each belonging to specially trained and screened employees, are necessary to decrypt)
(in practice, each phone has a hardcoded random nonce in efuses, and instead of encrypting the user passcode it encrypts [passcode + nonce]; otherwise the government could just brute-force 10^4 encryptions to the public key)
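A minimal sketch of that flow with the Python `cryptography` package (key size, nonce length, and all names are my own illustration, not from Ozzie's write-up):

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Manufacturer: one keypair; only the public half ships on every phone.
vault_key = rsa.generate_private_key(public_exponent=65537, key_size=4096)
escrow_pub = vault_key.public_key()

# Per-phone: a random nonce burned into efuses at the factory. Without it,
# anyone could pre-encrypt all 10^4 passcodes and match the stored blob.
device_nonce = os.urandom(32)

def escrow_blob(passcode: str) -> bytes:
    """What the phone stores and reveals when seized: Enc_pub(passcode || nonce)."""
    return escrow_pub.encrypt(passcode.encode() + device_nonce, OAEP)

blob = escrow_blob("1234")           # produced on the (now bricked) phone

# Manufacturer side, after verifying the request really came via a court:
recovered = vault_key.decrypt(blob, OAEP)[:-32].decode()
assert recovered == "1234"
```

(One nit: OAEP encryption is randomized, so the parent's unlock-by-re-encrypting-and-comparing step would need a deterministic construction or a different local check; the escrow path itself works either way.)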
I am only saying that this can be done efficiently, not saying that I agree with the desirability of key escrow. This idea of key escrow is as old as cryptography.
The only significant change from plain key escrow to Clear (bricking the phone) would itself defeat the usefulness of Clear.
That's only bullet point one, and it already falls apart there.
Also, this key escrow scheme is near impossible to scale to more than one government. Now we need a way to authenticate government agents; good luck with that.
The very idea of "checks and balances" is that different organizations strive for power in opposite directions, thus preventing each other from gaining much.
I mean, clearly there’s a difference between “blows up” and “destroys all the key material”, but clearly Apple can point to prior art here.
https://github.com/rayozzie/clear/blob/master/clear-rozzie.p...
What’s Ozzie’s true motivation? Is he looking to start a company running Clear and raking in patent revenue? I get why the governments want this, but not why a citizen would propose this.
If it weren’t Ray Ozzie, I would think this was just part of some propaganda push.
The benefit is that law enforcement has access to relevant information. Society has a vested interest in this provided it doesn't infringe any other rights. It's why warrants exist. If you have the ability to respect a warrant without hurting your customers, it should be illegal not to do so.
Obviously there are significant technical issues, which is why this is contentious; those are outside the scope of this comment.
But even I, with that bias, still worry quite often about what evil can lurk behind cryptographic structures, and what effect the wide availability of strong crypto will have on that.
I don't know whether it will be positive or negative... my gut says positive, but I worry. So it's not crazy to me to think Ozzie might be legitimately worried about people's safety.
Because most entrepreneurs aren't running businesses primarily for the money.
In Ozzie's proposal, the private key never actually has to exist outside the environment it was created in; only the public key does. As pointed out in other comments, LE would not need access to the private key either: they could simply submit the encrypted passcode to the manufacturer, who would then decrypt it on their behalf using the private key.
And how do we determine when that's actually the case and when it's overhyped or flawed intelligence?
> We don't live in a perfect world and we don't have a perfect solution.
Exactly, so focusing on phone encryption is probably a waste of time.
But phones are online devices. Why does the escrow key have to be a constant, which, if the central store is compromised, means all phones made prior to that date are compromised forever?
e.g., re-spin the per-phone keygen on some cycle, and you define a window of risk, but it passes. The re-spin clearly has to pass through some protocol, but we've been doing ephemeral re-keying forever with websites.
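One way to picture it (purely my own sketch; nothing like this appears in Clear): the manufacturer publishes a fresh escrow public key each epoch, phones periodically re-encrypt their passcode under the current key, and private keys past a grace window are destroyed, so a vault compromise only exposes devices that haven't re-keyed yet.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

class EscrowAuthority:
    """Hypothetical manufacturer-side key rotation: one keypair per epoch."""

    def __init__(self, grace_epochs: int = 1):
        self.grace = grace_epochs
        self.epoch = 0
        self.keys = {}
        self.rotate()

    def rotate(self):
        self.epoch += 1
        self.keys[self.epoch] = rsa.generate_private_key(
            public_exponent=65537, key_size=2048)
        # Shred keys past the grace window: blobs made under them become
        # undecryptable, so a later vault breach can't expose those phones.
        for old in [e for e in self.keys if e <= self.epoch - self.grace - 1]:
            del self.keys[old]

    def current(self):
        return self.epoch, self.keys[self.epoch].public_key()

authority = EscrowAuthority()

# Phone, on its periodic check-in: re-encrypt the passcode under the
# current epoch key and discard the old blob.
epoch, pub = authority.current()
blob = pub.encrypt(b"1234", OAEP)

authority.rotate()   # the old blob's window of risk closes after the grace period
```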
It's not like this would be Fort Knox. All that data could be stored on a couple of USB sticks, which, really, makes it even scarier. Someone could hold the entire contents in the palm of their hand and walk away with everything.
> If ever a single attacker gains access to that vault and is able to extract, at most, a few gigabytes of data (around the size of an iTunes movie), then the attackers will gain unencrypted access to every device in the world. Even better: if the attackers can do this surreptitiously, you’ll never know they did it.
One answer might be that we deserve such an outcome, and there is no reason to insulate encryption from the negative consequences. But is that a good answer?
no thanks
This is the most profound part of Matthew Green's piece in my opinion:
"While this mainly concludes my notes about on Ozzie’s proposal, I want to conclude this post with a side note, a response to something I routinely hear from folks in the law enforcement community. This is the criticism that cryptographers are a bunch of naysayers who aren’t trying to solve “one of the most fundamental problems of our time”, and are instead just rejecting the problem with lazy claims that it “can’t work”. "
I believe the most fundamental problem is: how can we decentralize real-world security? I am FOR mass surveillance but AGAINST centralized mass surveillance.
Assume every nook and cranny of the world was covered by community cameras, and the cameras encrypted their streams with threshold cryptography, such that the populace holds different parts of the secret; then one needs "enough" citizens agreeing to reveal the contents seen by a specific camera at a specific time. This way it's public for all or public for none. Every accident, every murder, ...
Suppose a body is found, and the group decides to reveal the imagery: oh yes, in this case the person was murdered! Look, the perpetrator is walking out of view to the next camera, then the next... we can trace him to where he is now. Properly trained citizens (in a now-authorized police role) go and arrest the guy. He is now in prison awaiting his trial (also with community cameras, so no broomsticks in prisoner ani). At trial time, if the person denies it, or claims to be a different person from the one arrested, we can trace through all the imagery from his committing the crime to his sitting in court right there and then.
So yes, there is a real conflict between cryptographers and centralized law enforcement. We don't need no spooks!
And the spooks cannot decode the camera imagery: a large enough number of citizens (chosen at random by cryptographic sortition), each running an instance of the good-citizen client software, need to release their part of the shared secret.
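The standard tool for that sharing is Shamir secret sharing. A minimal sketch in Python (toy parameters and names of my own choosing; a real deployment would need verifiable secret sharing, secure randomness, and much more):

```python
import random  # for a real system use the `secrets` module, not `random`

PRIME = 2**521 - 1  # a Mersenne prime, comfortably bigger than a 256-bit key

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares such that any k of them recover it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod PRIME
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

stream_key = random.randrange(PRIME)        # per-camera content key
shares = split(stream_key, k=600, n=1000)   # 600 of 1000 citizens must agree
assert recover(random.sample(shares, 600)) == stream_key
# 599 shares recover garbage (with overwhelming probability):
assert recover(random.sample(shares, 599)) != stream_key
```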
EDIT:
So there are, broadly speaking, 2 kinds of crimes:
* meatspace crimes (murder, negligence, rape, making child porn (automatically rape), ...)
* cyber crimes (copyright, child porn, ...)
I argue that not implementing such a community camera system is a form of negligence in itself.
It does not address things like copyright infringement, but... that's not exactly the most popularly supported concept.
Then there is the problem of child porn: fake and real.
I argue that with deepfakes, any faked child porn will eventually become indiscernible from real child porn.
Which leaves the problem of official child porn recorded by the community cameras and used to apprehend perpetrators (since the cameras also sign the imagery to attest to its authenticity!).
Due to taboo, many victims of child abuse didn't realize, or only suspected, that they were suffering abuse, enabling the abuse to continue. Without concrete visual examples for them to explore, to assess whether they are or are not suffering child abuse, how can they alert others to their situation? We send these children extremely mixed messages: absolutely tell us if you are being abused, but absolutely never falsely report a person. Merely asking someone else for advice is automatically interpreted as a child reporting child abuse. How can a child assess his or her situation? With abstract questions using words and connotations it does not know?
I believe the number of reported child abuses would go up if we used these community cameras for decentralized mass surveillance.
Also for crime in general (theft, murder, ...), the knowledge that you will, with extremely high probability, be caught will deter a lot of crime. I would not be surprised if the rate of "impulsive" crimes (where the criminal was supposedly not able to control his urges) dropped substantially, revealing that in the current system such criminals often get off the hook.
There will still be rude people, getting fines for squeezing women's asses while drunk. But for any actual crime, both victim and perpetrator would know that the victim can simply report it to the group, and that the perpetrator cannot escape through lack of evidence. The current lack of evidence constantly discourages people from reporting crimes (as there is risk involved: financial, such as lawyers; emotional, such as potential incredulity at the police station; ...).
One might think that this will cause criminals to escalate to murder ("if you rob a victim, you should kill her, or else she will report you"), but hiding a body will be very hard, and if a person goes missing, friends and relatives will report it; instead of following the criminal, we can follow the missing person from the time and place she was last reported seen!
As long as cryptographers only draw the privacy card, the law enforcement community has a point. As long as the law enforcement community only draws the centralized power card, the cryptographers have a point.
Only when we have decentralized mass surveillance can we have both privacy (as long as you don't commit crimes or go missing) and real law enforcement.
Common FAQ:
What if, say, a stalker repeatedly reports his ex as "missing"? Cry wolf too many times and you will be blocked from reporting a person missing; the good-citizen client software that citizens individually run will refuse to comply.
What if a stalker, or a group of them, repeatedly reports a "murderer" in a celebrity's bedroom? We can send a local but randomly selected, properly trained (group of) citizen(s) (in the police role) to go check the room; if the supposed dead body is not there, there is no reason to unlock the imagery.
(I will add more as people ask)
But the cameras are supposed to completely cover society, so we don't need the cyber info. Indeed, perhaps the perpetrator has a secret paper diary, written in code, where he writes down his exploits. Who cares? We have signed imagery of him committing the crime. Any extra information is useful in the statistical sense (to understand what drives a person to do this or that, or to better prepare citizens to avoid falling victim to such and such a crime), but it should be unnecessary for convicting a person. The most relevant evidence is the actions themselves, I think.
About location history: the camera system is more reliable than cell phones, since a cell phone may be given to a friend willing to provide an alibi, or its GPS spoofed, etc.
The major reason cell phone messages, search history, etc. are currently so relevant is simply that we lack the community camera system.
Another problem is that phone evidence is highly irregular: some people are more aware of mass surveillance than others when communicating (which is also highly correlated with status in society!), some people refuse to carry a cell phone, ...
When they lack enough evidence, the prosecution is forced to grasp at straws (irrespective of the guilt or innocence of the defendant), and then the value of computer/phone activity seems very high, especially when boots on the ground or scientific investigation of crime scenes is so much more expensive. Then it is easy to view this digital data as highly relevant and reliable.
I find that a disturbing classification.
Suppose we talk about vehicles: I could classify them by color (red, green, ...) or by type (cars, planes, ...).
What is a good or bad classification?
What is disturbing you? The mere topic?
We can't improve prevention of a problem without talking about the problem.
EDIT:
Note: I have added making child porn to meatspace crimes (even though that is obvious) specifically for you.
Would you consider the "Napalm Girl" [0] to be child porn? Or evidence of the atrocities of the use of napalm in the Vietnam war? Did it eventually contribute to the end of public support for the war?
"Total Surveillance is the Perfection of Democracy"
For once I disagree with RMS, re: https://www.gnu.org/philosophy/surveillance-vs-democracy.htm...
I believe that it is fundamentally not possible to "roll back" the degree of surveillance in our [global] society in an effective way. Our technology is already converging to a near-total degree of surveillance all on its own. The article itself gives many examples. The end limit will be Vinge's "locator dust" or perhaps something even more ubiquitous and ephemeral. RMS advocates several "band-aid" fixes but seems to miss the logical structure of the paradox of inescapable total surveillance.
Let me attempt to illustrate this paradox. Take this quote from the article:
"If whistleblowers don't dare reveal crimes and lies, we lose the last shred of effective control over our government and institutions."
(First of all we should reject the underlying premise that "our government and institutions" are only held in check by the fear of the discovery of their "crimes and lies". We can, and should, and must, hold ourselves and our government to a standard of not committing crimes, not telling lies. It is this Procrustean bed of good character that our technology is binding us to, not some dystopian nightmare.)

Certainly the criminally-minded who have inveigled their way into the halls of power should not be permitted to sleep peacefully at night, without concern for discovery. But why assume that ubiquitous surveillance would not touch them? Why would the sensor/processor nets and deep analysis not be useful, and used, for detecting and combating treachery? What "crimes and lies" would be revealed by a whistleblower that would not show up on the intel-feeds?
Or this quote:
"Everyone must be free to post photos and video recordings occasionally, but the systematic accumulation of such data on the Internet must be limited."
How will this limiting be done? What authority will decide who gets to collect (archive!) what and when? And won't this authority need to see the actions of the accumulators to be able to decide whether they are following the rules?

In effect, doesn't this idea imply some sort of ubiquitous surveillance system to ensure that people are obeying the rules for preventing a ubiquitous surveillance system?
Let's say we set up some rules like the ones RMS is advocating, how do we determine that everyone is following those rules? After all, there is a very good incentive for trying to get a privileged position vis-a-vis these rules. Whoever has the inside edge, whether official spooks, enemy agents, or just criminals, gains an enormous competitive advantage over everyone else.
Someone is going to have that edge, because it's a technological thing, you can't make it go away simply because you don't like it. If the "good guys" tie their own hands (by handicapping their surveillance networks) then we are just handing control to the people who are willing to do what it takes to take it.
You can't unilaterally declare that we (all humanity) will use the kid-friendly "lite" version of the surveillance network because we cannot be sure that everyone is playing by those rules unless we have a "full" version of the surveillance network to check up on everybody!
We can't (I believe) prevent total surveillance but we can certainly control how the data are used, and we can certainly set up systems that allow the data to be used without being abused. The system must be recursive. Whatever form the system takes, it shall necessarily have to be able to detect and correct its own self-abuses.
Total surveillance is the perfection of democracy, not its antithesis.
The true horror of technological omniscience is that it shall force us for once to live according to our own rules. For the first time in history we shall have to do without hypocrisy and privilege. The new equilibrium will not involve tilting at the windmills of ubiquitous sensors and processing power but rather learning what explicit rules we can actually live by, finding, in effect, the real shape of human society.
Just posting to say I have read your comment, and will most certainly edit this comment to reply tomorrow!
I will probably also want to be able to contact you (by some method acceptable for us both, email? IRC?) if I ever rewrite this in a more accessible format, or perhaps to collaborate on this subject?
A "leak" here happens when a trusted entity loses control of the secret to one or more untrusted and malicious entities. That's just a definition, not a claim that any particular government, company, or person is a trusted entity.
To counter this, we need multiple layers of defense.
One is the business of bricking the phones when the leaked secrets are exploited. That makes it plain that the secret has leaked. It's a valuable layer of defense.
Another is to make the secrets have limited useful lifetimes. Expiration and revocation for TLS certificates is a way to do that. Credit/debit card numbers can be deactivated and replaced rapidly. That's another way to limit the lifetime of a secret. Ozzie's proposal does not include a way to limit secrets' lifetimes. (Social Security numbers are problematic secrets: they too have unlimited lifetimes.)
A third layer is making the secrets have limited utility. If debit cards had daily spending limits, their secret numbers would be less useful than they are today, for example. Zero-day exploits are secrets with vast utility, for another example. Ozzie proposes a secret that unlocks an entire phone. How about limiting it to, say, the phone's call log or SMS log?
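A sketch of layers two and three combined (my own illustration, not part of any real proposal): every escrow record is scoped to one slice of data and carries an expiry, and the decryption service refuses everything else.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

ALLOWED_SCOPES = {"call_log", "sms_log"}    # deliberately not "full_device"

@dataclass
class EscrowRecord:
    device_id: str
    scope: str              # which slice of the phone this key unlocks
    wrapped_key: bytes      # data key encrypted to the escrow public key
    expires: datetime       # past this moment the record is dead weight

def check(record: EscrowRecord, now: datetime) -> None:
    """Refuse over-broad (layer 3) or stale (layer 2) records."""
    if record.scope not in ALLOWED_SCOPES:
        raise PermissionError(f"scope {record.scope!r} is not escrowed")
    if now >= record.expires:
        raise PermissionError("record expired; the phone must re-escrow")

record = EscrowRecord("device-42", "call_log", b"\x00" * 32,
                      expires=datetime.now(timezone.utc) + timedelta(days=90))
check(record, datetime.now(timezone.utc))   # passes now, refuses in 90 days
```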
A fourth layer is to keep the caches of secrets as small as possible, so a breach affects as few people as possible. Ozzie proposes the opposite of this.
A fifth layer: holders of caches of secrets must know they are strictly liable for breaches proportional to the damage they do. It must not matter whether the breach was due to negligence, carelessness, espionage, or salt water rusting out the safe after a storm. Large scale key escrow cache systems will never be able to meet this standard: nation states won't honor that liability, nor will they pay private companies enough to cover the insurance for it.
(Strict liability is not unprecedented: workers' compensation and the vaccine injury victims' compensation fund are two reasonably successful examples.)
People, companies, and governments holding secrets necessarily must consider what happens when (not if) they leak, and provide at least some defenses in depth like these.
Ozzie's proposal has weak and incomplete defenses in depth. That's why it's dangerous.
* A court order is required. It's not up to the tech vendor.
* Physical control of the device is required. No remote exploits.
* Access is enabled only to one device at a time. No mass hacking.
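A sketch of how a manufacturer-side service could enforce those three constraints (everything here, names and checks alike, is my own hypothetical; the Wired article [0] doesn't specify an implementation):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

court_key = Ed25519PrivateKey.generate()   # stand-in for a court's signing key
COURT_PUB = court_key.public_key()

unlocked: set[bytes] = set()   # permanent audit log, one entry per device

def unlock(device_id: bytes, brick_proof: bytes, warrant_sig: bytes) -> bytes:
    # 1. Court order required: the warrant must be a court signature over
    #    exactly this device's identifier.
    try:
        COURT_PUB.verify(warrant_sig, device_id)
    except InvalidSignature:
        raise PermissionError("no valid warrant for this device")
    # 2. Physical control required: the phone only reveals this proof value
    #    after burning its efuse and bricking itself (verification elided).
    if not brick_proof:
        raise PermissionError("device has not proven it bricked itself")
    # 3. One device at a time, never twice, always on the record.
    if device_id in unlocked:
        raise PermissionError("device already processed once")
    unlocked.add(device_id)
    return b"<passcode decrypted from escrow blob>"   # vault call elided

warrant_sig = court_key.sign(b"device-42")
print(unlock(b"device-42", brick_proof=b"efuse-burned", warrant_sig=warrant_sig))
```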
The point of security is to increase the cost to the 'attacker' (here we'll use that word even for legitimate government purposes); there's no perfect security; law enforcement can access data on iPhones already. Also, attackers focus on the weakest (i.e., least expensive) link and there's limited value in increasing the cost beyond the 2nd weakest link.[1] Except for the centralization of key storage and two other issues (see below), Ozzie's proposal might increase the cost to the level of law enforcement's alternative, acquiring a hacking tool. In fact, I've been thinking of something similar (court order, physical access required, notification to user) and might even have posted it to HN at some point.
Using hacking tools is much worse than Ozzie's process: There's no court (or at least it's not as enforceable, because there's no tech company checking for a warrant), no tech company, the user doesn't necessarily know their data has been accessed, remote exploits are possible, and so is mass hacking.
Also remember that private citizens can still encrypt their data at the file level using other tools, though of course most will not.
Here are weaknesses I see:
A) The use of other means of accessing devices would have to be outlawed, or law enforcement will continue to use hacking tools and citizens gain nothing.
B) Solve the centralization problem. Probably, the keys shouldn't be in the hands of the tech giants and should be distributed widely. EDIT: Perhaps require two unrelated parties for access?
C) If these new access tools are built into mobile devices, what happens in countries where people's rights have been taken away? The courts are often ineffective. I suppose the fact that the phones get bricked at least informs the user, and the authorities can use hacking tools anyway, so perhaps nothing is lost.
____________
[0] https://www.wired.com/story/crypto-war-clear-encryption/
[1] If I increase the cost of exploit A to $100,000 and exploit B costs $50,000, attackers will use B. If I increase the cost of A even further, to $200,000, it won't provide much more security - the attackers still will use B.