To me, the more profound consideration is this: if you use a strong alphanumeric password to unlock your phone, there is nothing Apple has been able to do for many years to unlock your phone. The AES-XTS key that protects data on the device is derived from your passcode, via PBKDF2. These devices were already fenced off from the DOJ, as long as their operators were savvy about opsec.
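(If you're curious what that derivation looks like mechanically, here's a minimal sketch in Python; the salt, passcode, and iteration count are made up for illustration, and the real derivation also tangles in a per-device hardware UID that never leaves the Secure Enclave, which is what forces brute-forcing onto the device itself.)

    import hashlib, os

    # Illustrative parameters only; iOS calibrates the cost so a
    # single guess takes a meaningful fraction of a second on-device.
    salt = os.urandom(16)
    passcode = b"correct horse battery staple"
    key = hashlib.pbkdf2_hmac("sha256", passcode, salt, 100_000, dklen=32)
    print(key.hex())  # 256 bits of key material, e.g. for AES-XTS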
iCloud backups can be secured so that not even Apple can get into them, but that is fundamentally much harder (the key can't be hardware-entangled if you still want to restore to a new device), and it would significantly complicate iCloud password changes. I'm sure they are working on it, but it is nontrivial.
That (software) problem is the real reason 99% of users are still exposed; as you say, the hardware and Secure Enclave holes are basically closed.
There is no way they are working on this. It is an intentional design decision that Apple offers an alternative way to recover your data if you lose your password.
Or if you die without telling your next-of-kin your password. Most people do not actually want all of their family photos to self-destruct when they die because they didn't plan for their death "correctly". That would be a further tragedy for the family. (Most people don't even write wills and a court has to figure things out.)
Making data self-destruct upon forgetting a password (or dying) is not a good default. It's definitely something people should be able to opt-in to in particular situations, but only when they understand the consequences. So it's great news that in iOS 9.3 the Notes app will let you encrypt specific notes with a key that only you know. But it's opt-in, not the default.
http://6abc.com/news/senior-official-stresses-feds-need-to-u...
Look at the controversy over the phone not booting with third-party fingerprint reader repairs as an example. People were upset when they found out that having their device worked on could make it unbootable, but Apple was able to easily fix it with a software update. If it had been designed more securely, it might have wiped data when it detected unauthorized modifications, which would have meant even more upset people. Now that this has become a public debate, there will be a very different response to making it more secure.
Making the DFU update path more complex increases the risk of bugs and thus the risk of permanently bricking phones.
You could imagine an alternative where on boot the Secure Enclave runs some code from ROM which checks that a hash of the SE firmware matches a previously signed hash, which is only updated by the Secure Enclave if the user entered their PIN during the update. If it doesn't match, either wipe the device or don't boot until the previous firmware is restored.
This way Secure Enclave firmware updates and updates via DFU are still possible, but not together without wiping the device.
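A minimal sketch of that boot flow, with every name hypothetical (nothing here is a real Apple internal):

    import hashlib

    def secure_boot(firmware: bytes, pinned_hash: bytes) -> str:
        # ROM-resident check: does the installed SE firmware match the
        # hash pinned the last time the user authorized an update?
        if hashlib.sha256(firmware).digest() == pinned_hash:
            return "boot"
        # Mismatch: wipe the keys, or refuse to boot until the previous
        # firmware is restored. DFU restores stay possible either way.
        return "wipe_or_refuse"

    def authorize_update(new_firmware: bytes, pin_entered: bool) -> bytes:
        # Only the Secure Enclave itself updates the pinned hash, and
        # only when the user entered their PIN during the update.
        if not pin_entered:
            raise PermissionError("SE update requires the user's PIN")
        return hashlib.sha256(new_firmware).digest()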
That basically happened (at a smaller scale) just last week. When Apple apologized and fixed the "can't use iPhone if it's been repaired by a 3rd party" thing, the fix required updating phones which were otherwise bricked. It's not an unreasonable scenario.
That probably also means removing most debugging connections from the physical chip, and making extra sure you can't modify secure enclave memory even if you desolder the phone.
You decap the chip with HF to expose the die, and then with a focused ion beam and a million-dollar microscope setup you can rearrange the circuits. So if the NSA absolutely had to have the data on the chip, they could modify it to make it sing. If they knew, say, that an iPhone had the location of Bin Laden on it, they could get the goods without Apple.
Locking themselves out of the Secure Enclave isn't anywhere near sufficient. As long as the device software and trust mechanisms are totally opaque and centrally controlled by Apple, the whole thing is just a facade. There's almost nothing Apple can't push to the phone, and the auditability of the device is steadily trending towards "none at all".
If the NSA pulls a Room 641A, we'd never know. If Apple management turns evil, again, we'd never know. If a foreign state uses some crazy TEMPEST attack to acquire Apple's signing keys... again, we'd never know.
I don't think acting like an open ecosystem is the be-all and end-all of security is productive. Most organizations (let alone individuals) don't have the resources to vet every line in every piece of software they run. Software follows economies of scale, and hard problems (e.g. TLS, font rendering) will only ever have one or two major offerings. How hard would it be to introduce another Heartbleed into one of those?
The important thing about the Secure Enclave is that it pushes security over the line, so that the attacker has to compromise you before you do whatever it is that will get you on somebody's shitlist.
Is this true even if you use Touch ID?
The only point I'm making is that Apple already designed a cryptosystem that resists court-ordered coercion: as long as your passcode is strong (and Apple has allowed it to be strong for a long time), the phone is prohibitively difficult to unlock even if Apple cuts a special release of the phone software.
Copying a good fingerprint from a dead finger or a randomly placed print is not easy [2]. It's hard; doable, but you only get five tries, so if you screw up you've thrown away all the hard work of the print transfer.
All bets are off if the iPhone is power-cycled. Best bet if you're pulled over by authorities or at a security checkpoint is to turn off your iPhone (and have a strong alphanumeric passcode).
[1] https://xkcd.com/538/
[2] https://blog.lookout.com/blog/2013/09/23/why-i-hacked-apples...
Also remember that rubber-hose cryptanalysis is always an option.
It was near-impenetrable, but it could have been impenetrable if it weren't for the fact that Apple could push OS updates without user consent. They could have made it impossible for anyone to get in even if your PIN was 1234, but didn't.
Kind of disappointing given their whole thing about the Secure Enclave. Bunch of big walls in the castle, but they left the servant's door unlocked.
The main difference would be that everyone knows TrustZone through Qualcomm's implementation and software, as it's been broken many times. At the end of the day "it's just software" though, which runs on a CPU-managed hypervisor with strong separation ("hardware", but really, the line is quite a blur at this level).
What that means is that you need to be unable to update the Secure Enclave without the user's code (so the enclave itself needs to check that), which is probably EXACTLY what Apple is going to do.
Of course, Apple can still update the OS to trick the user into entering the code elsewhere, and then the FBI can use that to update the enclave and decrypt; though that means the user needs to be alive, obviously.
Past that, you'd need to extract the data from memory (actually opening the phone) and attempt to brute-force the encryption. The FBI does not know how to do this part; the NSA certainly does, and arguably Apple might, since they design the chipset itself.
- Apple is required to have backdoors, at least on iPhones sold in foreign countries, isn't it?
- Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.
- Even if iCloud is supposed to be encrypted, they haven't opened that part up to public scrutiny.
- Therefore, perfect security around the SE only solves the problem of accessing a phone that wasn't backdoored yet. There is every reason for, say, Europe and the CIA to require phones to be backdoored by default for LE and economic-intelligence purposes.
But in both those situations the weakness is in the person, not the device. Apple devices still potentially have security weaknesses which the FBI is asking Apple to exploit for them. Apple wants to fix these weaknesses, to stop Apple being forced to exploit them.
I don't believe this is the case.
> Even if the SE were completely secure, a rogue update of iOS could intercept the fingerprint or passcode whenever it is typed, and replay it to unlock the SE when spies ask for it. As far as I know, the on-screen keyboard is controlled by software which isn't in the SE.
What you say about an on-screen passcode is likely true, but the architecture of the Secure Enclave is such that the Touch ID sensor communicates over an encrypted serial bus directly with the SE, not with iOS itself. It assumes that the iOS image is not trustworthy.
From the white paper [1]:
> It provides all cryptographic operations for Data Protection key management and maintains the integrity of Data Protection even if the kernel has been compromised.
> ...
> The Secure Enclave is responsible for processing fingerprint data from the Touch ID sensor, determining if there is a match against registered fingerprints, and then enabling access or purchases on behalf of the user. Communication between the processor and the Touch ID sensor takes place over a serial peripheral interface bus. The processor forwards the data to the Secure Enclave but cannot read it. It’s encrypted and authenticated with a session key that is negotiated using the device’s shared key that is provisioned for the Touch ID sensor and the Secure Enclave. The session key exchange uses AES key wrapping with both sides providing a random key that establishes the session key and uses AES-CCM transport encryption.
[1]: https://www.apple.com/business/docs/iOS_Security_Guide.pdf
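Reading between the lines of that paragraph, the scheme is roughly the following (a loose sketch, not Apple's actual protocol; the XOR combining step and all parameters are my guesses):

    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
    from cryptography.hazmat.primitives.ciphers.aead import AESCCM

    shared_key = os.urandom(32)  # stands in for the factory-provisioned key

    # Both sides contribute a random key; XOR stands in for the
    # unspecified combining step that establishes the session key.
    sensor_rand, enclave_rand = os.urandom(16), os.urandom(16)
    session_key = bytes(a ^ b for a, b in zip(sensor_rand, enclave_rand))

    wrapped = aes_key_wrap(shared_key, session_key)  # what crosses the SPI bus
    assert aes_key_unwrap(shared_key, wrapped) == session_key

    # Fingerprint frames then travel under AES-CCM transport encryption.
    nonce = os.urandom(13)
    ct = AESCCM(session_key).encrypt(nonce, b"fingerprint frame", None)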
I hate to be that guy, but if you have an op and you have any opsec, you aren't even carrying a phone.
Right?
I mean, we're talking about threat models where chip-level doping has been shown as an attack. This just seems to be a variation on the claims we've had forever about tamper-resistant copy-protection dongles: someone builds a secure system premised on a secret held in a tiny tamper-resistant part, and the tamper resistance is eventually cracked.
It might even be the case that you don't need to exfiltrate the UID from the Enclave; what the FBI needs to do is test a large number of PIN codes without triggering the backoff timer or wipe. But the wipe mechanism and backoff timer run in the application processor, not in the enclave, and so they are susceptible to cracking attacks the same way many copy-protection schemes are.
You may not need to crack the OS, or even upload new firmware. You just need to disable the mechanism that wipes the device and the escalating delay on wrong tries. So for example, if you can manage to corrupt or patch the part of the system that does that, you can try thousands of PINs without worrying about triggering the timer or wipe, and without needing to upload a whole new firmware.
I used to crack disk protection on the Commodore 64 and no matter how sophisticated the mechanism all I really needed to do was figure out one memory location to insert a NOP into, or change a BNE/BEQ branch destination, and I was done. Cracking often came down to mutating 1 or 2 bytes in the whole system.
(BTW, why the downvote? If you think I'm wrong, post a rebuttal)
* Decapping and feature extraction even from simpler devices is error prone; you can destroy the device in the process. You only get one bite at the apple; you can't "image" the hardware and restore it later. Since the government is always targeting one specific phone, this is a real problem.
* There's no one byte you can write to bypass all the security on an iPhone, because (barring some unknown remanence effect) the protections come from crypto keys that are derived from user input.
* The phone is already using a serious KDF to derive keys, so given a strong passphrase, even if you extract the hardware key that's mixed in with passphrase, recovering the data protection key might still be difficult.
Any mechanism that either a) prevents the application processor from remembering it incremented the count, b) corrupts the count, or c) patches the logic that handles a retry count of 10 is sufficient to attack the phone.
Somewhere in the application processor, code like this is running:
    if (numTries >= MAX_RETRY_ATTEMPTS) { wipe(); }

or

    if (numTries >= MAX_RETRY_ATTEMPTS) { retryTime = retryTime * 2; }
Now there are two possibilities: either there are redundant checks, or there aren't. If there aren't redundant checks, all you need to do is corrupt this code path or memory in a way that prevents its execution, even if that means crashing the phone and triggering a reboot. Even with 5 minutes between crash-reboot cycles, they could try all 10,000 PINs in about 35 days.
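Sanity-checking that figure:

    # 10,000 four-digit PINs at one 5-minute crash/reboot cycle each
    print(10_000 * 5 / 60 / 24)  # ~34.7 days

so the arithmetic holds.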
But you could also use more sophisticated attacks if you know where in RAM this state is stored. You wouldn't need to decap the chip; you could just use local methods to flip the bits. The iPhone doesn't use ECC RAM, so there are a number of techniques you could use.
I disagree. The PIN validation is done within the Secure Enclave. You can't do it outside the Secure Enclave because the PIN is combined with a secret that is burned into its silicon. The Secure Enclave can and will enforce timeouts for repeated failures, as well as refuse to process any PIN entries after too many attempts. Disabling the wipes or bypassing the timer won't do you any good when you only have a few attempts.
https://blog.trailofbits.com/2016/02/17/apple-can-comply-wit...
Look, there's a big difference between trusting known ciphers that have been well studied by the world's top cryptographers, and a proprietary TPM chip that relies on security-through-obscurity.
The history of embedding secrets into black boxes is a history of them being broken. This isn't a theoretical concern, it's a very practical one.
Don't discuss your (or others') votes.
Don't interrupt the discussion to meta-discuss the scoring system.
Sure, to resist microscopic attacks, an IC must assert logical integrity to itself i.e. that the gates & wires are not compromised by a microscopic attack.
But just because you and I haven't imagined it doesn't mean some kind of internal canary can't exist. Your naive counter code (above) might instead be based on quantum cryptography, or on intrinsic properties of a function or algorithm which, if compromised, leave the SE unable to function at all.
The existence of one-time password schemes like S/KEY gives me hope, since it is a sequence generator that simply doesn't function without input of the correct next value (technically the previous value from the hash function). S/KEY itself is not the answer (wrong UX and no intrinsically escalating timer), but I wanted to illustrate that you can generate a self-validating sequence without tracking integer position.
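To make that concrete, here's a minimal hash-chain sketch (the essence of S/KEY, simplified; not its actual wire format): the verifier keeps only the current chain head, and a presented value validates itself by hashing forward to match, with no counter stored anywhere.

    import hashlib

    def h(x: bytes) -> bytes:
        return hashlib.sha256(x).digest()

    # Build a chain of length n from a secret seed.
    seed, n = b"secret seed", 1000
    chain = [seed]
    for _ in range(n):
        chain.append(h(chain[-1]))

    verifier_head = chain[-1]  # the only state the verifier keeps

    def verify(presented: bytes) -> bool:
        global verifier_head
        if h(presented) == verifier_head:
            verifier_head = presented  # advance down the chain
            return True
        return False

    assert verify(chain[-2])      # the correct previous value validates
    assert not verify(b"wrong")   # anything else does not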
Apple apparently has a motive and the war chest for the R&D. If they're hiring cryptographers (has anyone checked?), they're acting on it.
The cynical side of me says that Apple's marketing tactics have worked. But I've got a feeling, heck, I want to believe, that this is actually driven by company values and not a short-term marketing benefit.
Of Palladium, Bruce Schneier said:
> "There's a lot of good stuff in Pd, and a lot I like about it. There's also a lot I don't like, and am scared of. My fear is that Pd will lead us down a road where our computers are no longer our computers, but are instead owned by a variety of factions and companies all looking for a piece of our wallet. To the extent that Pd facilitates that reality, it's bad for society. I don't mind companies selling, renting, or licensing things to me, but the loss of the power, reach, and flexibility of the computer is too great a price to pay."
I think his fears have come true to some extent in iOS, but knowing what we know now about government surveillance of everybody, it may no longer seem like too great a price to pay. That is, if you trust the vendor. Apple seems to be worthy of that trust. But Microsoft...?
Edit: formatting
We're already paying that price, essentially. An iPhone won't run arbitrary code, a replacement OS, or accept code from arbitrary sources. It's already an exclusively vendor-curated platform. If you're already going to buy into that model, I don't see the point in not going for the greatest amount of protection that you can get. (OK, yes, a dev can compile their own code and push it to their own device. I'm actually not sure why I don't hear about this happening more often as a way to run "unacceptable" programs on iOS devices).
Oh no... it's working...
Edit: a Nexus device bought directly from Google with the right hardware may address both points.
I'll repost a snippet from a post by merhdada that hints at the root of one of the problems with Android security:
"This can happen only because of a design flaw in the security architecture of Android (L). Unlike iOS and like traditional PCs, the disk encryption key is always in memory when the device is booted and nothing is really protected if you get a device in that state. It's an all-or-nothing proposition."
Please read the entire thread, and check the links referenced in that thread, for information on how issues like these are mitigated.
That's only one issue though. There are a few more.
But a lot of the time none of that even matters... you really won't need to hack an Android phone, because the data is also on corporate servers. So the FBI could get at it most of the time anyway.
Is anyone aware of anything that makes this more than a leap of faith?
Do I really need a quad-core smartphone with a dedicated GPU, 3 GB of RAM, a higher pixel density than I can possibly distinguish, etc. etc.?
Why would I settle for shitty crypto just because the information isn't a state secret?
In both cases, when was the last time you drove it at its maximum speed all the time? Or ensured that you were using maximum torque at all times and always sitting in the maximum power band for the engine?
If you find that you haven't done these things, you probably should ask yourself why you have a car, right? After all, you're never going to drive the full speed of the car, so why have the car in the first place?
Anyway, a rational politician would have a tremendous uphill battle against both Pride and Ignorance. He or she would have to have tremendous skill as a teacher and a leader, not to mention the emotional fortitude of a Buddha to endure the onslaught of hatred.
Sanders has expressly argued that climate change is a bigger national security threat than terrorism (or anything else) -- and did so in one of the Democratic debates, in response to a question on national security threats. While that may not be directly minimizing terrorism, it certainly is explicitly placing it behind other problems in terms of need for focus.
> (And in another twist of irony I am positive that the American Revolutionaries were called terrorists by the British.)
They absolutely were not; the term "terrorists" was first applied to the leaders of the regime of the Reign of Terror in the French Revolution (shortly after the American Revolution), and it was quite a long time after that before the term was applied to actors other than state leaders applying terror as a weapon to control their subject population.
Out of curiosity, what evidence is there that there isn't?
Perhaps I should ask what you mean by "a terrorism problem" as well.
The reason iCloud data will always be accessible by Apple, and thus governments, is not because Apple wants to make it accessible to governments. It's so that Apple can offer customers the very important feature of accessing their own data if they forget or otherwise don't have the password. That is an essential feature, and why this aspect will never change.
When someone passes away, for example, it would be a terrible compounding tragedy if all their photos from their whole life passed away along with them, because they didn't tell anyone their password or where they kept the backup key. So Apple wants and needs to provide an alternative way to recover the account. (For example, they will provide access to a deceased person's account if their spouse can obtain a court order proving the death and relationship.)
Harvard recently published a paper (called "Don't Panic") that says essentially the same thing.[1] Governments shouldn't "panic" because in most cases consumers will not be exclusively using unbreakable encryption, because it has tradeoffs that aren't always desirable.
And the reason most consumers should be backing up to iCloud is similar: that's how you prevent the tragedy of losing your data if you lose your phone.
Just something to keep in mind when discussing the "going dark" and "unhackable" news items.
It is worth noting however that people who do "have something to hide" from governments probably won't be using iCloud, if they know what they're doing. Then again if they know what they're doing, they wouldn't use anything that is backdoored anyway. So the naive criminals will still probably be hackable, and that's about all we can hope for.
[1] https://cyber.law.harvard.edu/pubrelease/dont-panic/Dont_Pan...
I have absolutely nothing to hide. I have simply always treated my privacy as something that is valuable in and of itself. Perhaps even more valuable than the photos I clearly opted not to share with others, to go off your example.
I also don't understand why it's so absurd for some people to conceptualize non-malicious things you wouldn't want to share with anyone but yourself. Hell, I have tons of notes and things I write to myself that I definitely do not want seen by anyone. They simply weren't written with the intention of being read by others. So I don't sympathize with the desire to make a deceased person's private things accessible, even to family. Let it burn. It might have been the owner's intention all along.
The problem here is that a lot of the stuff stored on phones falls somewhere between "dies with me" private and "should pass on to my family" private. Or "should be recoverable if I lose my key" private.
Strong encryption makes it impossible to recover in the event of a lost key or pass on to family in the event of your death. So that's not necessarily a great default for, say, decades of family photos. It would be a huge tragedy if that was lost.
The good news is that Apple does provide tools to opt-in to stronger security, rather easily. For example, the Notes app was recently upgraded with note-level strong encryption. That might be a good solution for your most private notes, without endangering the survivability of your digital memories and assets.
Would you really expect Apple to recover the data in this scenario for the next of kin? I certainly wouldn't, and I wouldn't want them to.
If there was something the deceased person truly wanted hidden from their next of kin, they could use stronger encryption for that. The Notes app, for example, allows for note-level strong encryption. But it's not an ideal fit for the more typical use case.
Anyway, they do it.[1]
[1] http://www.cnet.com/news/widow-says-apple-told-her-to-get-co...
I certainly don't know if Apple should, without a court order, share with next of kin any of my data that I hadn't explicitly shared, were I to pass away.
I'm wondering though - what happens if I stop paying my $2.99/month for 200 GB - will any of my existing photos be wiped out of shared photostreams?
The problem with requiring explicit sharing is that a lot of people don't realize they need to do it to properly navigate these future events. Just look at how many people fail to write wills. You wouldn't want real-world assets to be automatically destroyed because you failed to write a will, even though there may be some things in there you didn't want to pass down. The "failsafe" mechanism there is that a court figures it out. So that's apparently what Apple is doing.
But keep in mind this is not just about next-of-kin. It's also about the ability for you to recover your life if you forget your password. That is why Apple will always have a "backdoor" into iCloud.
A pretty interesting point.
Photos are probably good to recover... unless they were photos of something horrible you did (beating someone up, photos of your anatomy you sent, etc.).
What about text messages? Again, it could express what kind of person you are. Do all iOS users have unwitting diaries that will be unlocked at our death in the form of our iMessage and SMS history?
In 400 years, will our descendants point out, "Wow, great-great-great-grandma was pretty awful, did you see this text she sent once?" in a way that removes the context the message was written in at the time? This is something we can't know about our ancestors... and probably for the best, since otherwise we might be disappointed in them.
Or maybe that's an okay thing?
But for bank account records, most photos, etc., you probably don't want those to disappear in the event of your death. You want those to pass on.
Given the choice between the two defaults, it makes a lot of sense for Apple to make "accessible to next of kin" the default, and "dies with you" the opt-in.
And may the era of homomorphic encryption schemes come, rendering moot Apple's and other companies' plausible excuse of needing access to unencrypted data when performing back-end processing/recovery on their clients' data.
edit: well, to correct myself, as you said that wouldn't obviate the need for the feature of "recover data without password, after passing some other security tests"
Apple could make truly secure systems user friendly if they wanted to. It seems they may see some value in doing so.
It's just not an option that your average person would want for their family photos.
Q: Can't I already encrypt my iCloud data via a keychain?
Having said that, you can add another layer of your own encryption to certain data that is stored in iCloud, for example in the latest Notes app in iOS 9.3. Apple won't have that key, but the app warns you the data will be lost if you lose it. You could also encrypt files you store in iCloud Drive using an encryption app. But you wouldn't be able to do this with other data that is managed by iOS, like iCloud backups or photo libraries.
Since 197X, people had home computers (and institutional computers for two decades before that) on which the FBI could install anything they want, if that equipment fell into their hands. This fact never made news headlines; it was taken for granted that the computer is basically the digital equivalent of a piece of stationery, written in pencil.
There is nothing wrong with that situation, and on such equipment, you can secure your data just fine.
No machine can be trusted if it fell under someone's physical access. Here is a proof: if I get my hands on your device, I can replace it with a physically identical device which looks exactly like yours, but is actually a man-in-the-middle (MITM). (I can put the fake device's board into your original plastic and glass, so it will have the same scratches, wear, grime pattern and whatever other markings that distinguish the device as yours.) My fake device will collect the credentials which you enter. Those are immediately sent to me and I play them against the real device to get in.
Apple are trying to portray themselves as a champion of security, making clueless users believe that the security of a device rests in the manufacturer's hands. This could all be in collaboration with the FBI, for all we know. Two versions of Big Brother are playing the "good guy/bad guy" routine, so you would trust the good guy, who is basically just one of the faces of the same thing.
This is already the case. Right now, only firmware signed by Apple can be installed. The next logical step is to build a system where the unit that deals with PINs cannot be updated at all, or at least not without wiping all keys. This would prevent any non-invasive attempts of bypassing the rate-limiting of PIN attempts or auto-wipe.
> There is nothing wrong with that situation, and on such equipment, you can secure your data just fine.
Again, this is also true for an iPhone with a sufficiently complex passphrase, Because Crypto™. Secure Enclave is just an additional layer that protects against everyone not in a position to get custom firmware signed by Apple.
> No machine can be trusted if it fell under someone's physical access. Here is a proof: if I get my hands on your device, I can replace it with a physically identical device which looks exactly like yours, but is actually a man-in-the-middle (MITM). (I can put the fake device's board into your original plastic and glass, so it will have the same scratches, wear, grime pattern and whatever other markings that distinguish the device as yours.) My fake device will collect the credentials which you enter. Those are immediately sent to me and I play them against the real device to get in.
The scenario here isn't an Evil Maid Attack. It's about protecting locked devices while someone else has physical access to them. Right now, you're fairly safe from most attackers in this scenario. In the future, with a read-only Secure Enclave, you're also safe from Apple and anyone who could force Apple to sign firmware. The fact that Evil Maid Attacks are harder to pull off because of this is just a nice extra.
> Apple are trying to portray themselves as a champion of security, making clueless users believe that the security of a device rests in the manufacturer's hands. This could all be in collaboration with the FBI, for all we know. Two versions of Big Brother are playing the "good guy/bad guy" routine, so you would trust the good guy, who is basically just one of the faces of the same thing.
This doesn't make sense. There's no crypto backdoor. The worst case scenario for their current security architecture is that it falls back to how FDE works on a desktop system - i.e., it's completely dependent on your passphrase complexity.
How do you plan to flash all the HDD/USB/Network controllers? Not to mention the CPU/GPU microcode, and countless other random chips inside your computer that are executing firmware you have no access to.
We're already hosed. It's just a matter of what's considered a 'reasonable' barrier.
I don't care whether a given processor is microcoded via a tiny ROM, or whether it is all hard-wired gates; the difference is just in the instruction execution timings.
We are not "hosed" in any way by this.
As soon as the microcode is writable, then we have questions: can anyone write arbitrary microcode and put it in place? Or is there some tamper-proof layer that only accepts signed microcode, and who has the keys?
I'm not well versed in security, so excuse my ignorance, but what if there were a way to solder a chip onto the board that allows access to the secure enclave? Every time an iPhone is made, a companion chip is produced that contains some kind of access key which only works for that device, and someone is required to foot the bill for storing them.
This is something Apple practically guaranteed by using platform DRM to turn themselves into a critical single point of failure.
CALEA was extended to ISPs once ISPs consolidated enough; now that Apple has consolidated central control of mobile devices in a similar fashion, it seems quite likely that extending CALEA to cover smartphones will be on the table.
I'd be extremely surprised if Apple's management wasn't very aware of the CALEA precedent, but they chose to go down this road anyway. I find that rather unsettling.
But unless you can point at "this is Bill, we are targeting Bill, we have a warrant for Bill, we need one phone that we will make sure becomes Bill's", All Writs cannot help.
If Bill mail-orders a new iPhone, they can compel the Apple Store to give him a compromised device. They could probably put FBI teams posing as store employees in every store, if Bill is a high-value enough target and is expected to buy an iPhone today.
But they cannot say "compromise all SF Bay Area iPhones because we expect one of them to be bought by Bill".
Some of the lawyers here correct me if I am too wrong.
Today, we celebrate the first glorious anniversary of the Information Purification Directives. We have created, for the first time in all history, a garden of pure ideology—where each worker may bloom, secure from the pests purveying contradictory truths. Our Unification of Thoughts is more powerful a weapon than any fleet or army on earth. We are one people, with one will, one resolve, one cause. Our enemies shall talk themselves to death, and we will bury them with their own confusion. We shall prevail!

- George Orwell, 1984
- Apple, 2016
https://www.youtube.com/watch?v=R706isyDrqI
They should re-run this commercial for iPhone 7. "On September 24th, Apple Computer will introduce iPhone 7. And you'll see why 2017 won't be like 1984."
@Udik: I could just keep my tax documents in printed plaintext on top of my dresser but I opt to keep them locked up. Privacy and security are important. If people who utilize privacy/security tools are up to no good then why does the U.S. Gov't have a clause for not revealing information due to State Secrets? Why do we set our Facebook profiles to private? Why have passwords at all on anything? Are you beginning to see the point?
For many customers of hardware and software trust is what is being sold.
As trust is eroded, 'good enough' is no longer good enough. The only way to continue to be trusted is to be more secure, and as the grandparent points out, the endgame there is that the encryption puts the software and hardware beyond the reach of the company that produced it.
You don't compromise there: it goes against your customer base and your product positioning, and furthermore it dilutes your brand.
https://news.ycombinator.com/item?id=10906999
Apple is far from having a secure phone right now. The NSA certainly has ways to bypass this, based on my attack framework and their prior work. They just don't want them to be known. They pulled the same stuff in the past, where the FBI talked about how they couldn't beat iPhones but the NSA had them in the leaks and was parallel-constructing cases for the FBI. So the current crop are probably compromised, but reserved for targets worth the risk.
That said, modifying the CPU to enable memory and I/O safety, restricting the baseband, an isolation flow for the hardware, and some software changes could make a system where 0-days were rare enough to be worth much more. Oh yeah, they'll also have to remove the debugging crap from their chips and add TEMPEST shielding. Good luck getting either of those two done. ;)
Do you have a link to a leak that shows this? I couldn't find anything with a simple google search.
I want it all to go when I do. Hell, I want some of it to go now.
After I'm gone, I want to leave no part of my existence on the internet.
I realize that's not possible. But I want to minimize my footprint.
It is totally possible for a local device. I have a deadswitch on all my computers. If I don't log in and set an alive flag via the command line in any of my computers for more than a week, that computer securely wipes itself.
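The mechanics are simple enough; something in the spirit of this sketch (hypothetical paths and all, with the actual wipe deliberately stubbed out):

    import os, sys, time

    FLAG = os.path.expanduser("~/.alive")
    WEEK = 7 * 24 * 3600

    if len(sys.argv) > 1 and sys.argv[1] == "alive":
        open(FLAG, "w").close()  # touch the flag: I'm still here
    elif not os.path.exists(FLAG) or time.time() - os.path.getmtime(FLAG) > WEEK:
        # Run from cron; a week of silence triggers the wipe.
        print("flag stale for a week; would securely wipe here")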
Let it be known, I have nothing to hide. I just think this is the best way to do things.
Edit: My reason for this is the frequency with which I encounter people who are no longer alive. It's a harsh thing to look at a link to something said by someone you used to know, and then suddenly realize, "Oh shit. He's dead. And I used to be his best friend."
I know facebook has memorial pages, but those are difficult to get.
Private information is another matter, but when people presume they have rights to choose how others think, it really makes my blood boil.
http://www.b-list.org/weblog/2013/jan/29/persistence/
Since then I've started noticing services rolling out the ability to specify someone to take over your account after you die, and I suspect the legal framework around wills and estates is robust enough that you could leave instructions (and have them enforced) to delete things.
What encryption and security really does is create scarcity of access to information and data in order to force a market solution where government groups have to prioritize their efforts and apply them deliberately.
The only reason previous wiretapping laws were passed is that they weren't in the limelight and the public never had a chance to weigh in. Let's make this an election issue.
Unless it breaks DRM!
USA FREEDOM was passed fairly specifically because the issue was in the limelight.
Nothing is 100% foolproof, and crypto certainly isn't. It's going from child's play to "you actually need knowledge" to "this is actually hard now" (but... not impossible).
Passphrases suck enough whenever you have to log back in. Are people really gonna put up with that every time they want to use their phone?
On the other hand, if there were a convenient way to toggle between passphrases and 4-digit unlock, (especially if you had to use the passphrases to toggle back to 4-digit) then I would be all for it.
Low-friction UI has been Apple's differentiator for years. If you have to take the time to type out a secure passphrase every time you want to interact with your phone, people will stop interacting with their phones (or use phones that suck less).
Why do you think they built Touch ID? Because typing even a 4-digit PIN is too annoying!
Longer answer: There's a key that encrypts the actual data, and that key is stored on disk, but encrypted with your passcode along with a hardware key. The hardware key cannot be read, only used to decrypt. Changing your code just changes the key stored to disk, but not the encryption key, so it's quick, but preserves security.
Longest and most accurate answer: https://www.apple.com/business/docs/iOS_Security_Guide.pdf from page 10.
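A toy version of that hierarchy, to show why changing the passcode is instant (illustrative only; the salt/tangling details are stand-ins, not Apple's construction):

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap

    hardware_key = os.urandom(32)  # stands in for the unreadable UID
    data_key = os.urandom(32)      # the key that actually encrypts your data

    def wrap_with(passcode: bytes) -> bytes:
        kek = PBKDF2HMAC(hashes.SHA256(), 32, hardware_key, 100_000).derive(passcode)
        return aes_key_wrap(kek, data_key)  # this wrapped blob is what's stored

    stored = wrap_with(b"1234")
    stored = wrap_with(b"hunter2")  # passcode change: rewrap the same data_key;
                                    # the bulk data is never re-encrypted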
It's such a grey area, and I will probably get downvoted for commenting this way. I 100% agree that this power, in the wrong hands, is horrible, but can't we talk about this in a way where there's some kind of middle ground? All I've been reading are the two extremes.
They give you the choice.
I'm not sure it's a perfect solution but might be better than counting on someone to reverse engineer or hack into your phone.
If you're serious about encryption you should always have a backup key somewhere... unless you want a single point of failure (you). Both should be an option.
I was a bit surprised by the clickbait-y nature of the HN title, but we can see in the nytimes URL that this "Apple Is Said to Be Working on an iPhone Even It Can’t Hack" was the original title, eh.
1. http://www.pcgamer.com/john-mcafee-on-his-fbi-iphone-hack-of...
2. http://arstechnica.com/staff/2016/02/mcafee-will-break-iphon...
edit: added source #2; see Google for additional sources...
"Balanced" compared to what? To the 80% insecurity we have now? And "balance" for what protocol? For all existing protocols? For all future protocols? What if hackers learn how to exploit that "balance" in a massive way? Will companies be allowed to fix it by improving the security or will they be "impeding law enforcement"?
It's unbelievable to me how hard the government is fighting against basic security.
It's not an attainable goal in practice. Today they generate a per-device customized update that can be installed without user intervention. Even if tomorrow they enforce user intervention, they still retain the capability to push a targeted update to a specific device on a law enforcement/court order. The user has no way of telling what the update did.
(although there's a whole separate set of legal attacks unexplored)
Again, it is objectively very strange to not even hint at what the source of your information is. But it's also standard practice.
Full disclosure: I understand this was a person's work phone. This statement is solely being posted to stimulate theoretical discussion.
(Somehow, I feel iMessage and related apps are MITMable because there is no mandatory, mutual, out-of-band validation of a recipient's identity.)
(of course if the phone is not in use anymore it doesn't apply)
If the software (Android) had the same type of protection (if the wrong PIN is entered 10 times it destroys the key), would this device be at par with the iOS approach?
If Apple can't launch new iOS versions, can they still launch new iPhones?
What if the feds decide that an OS update closes a zero-day that the NSA was using (note they've been really quiet here) and interferes with an FBI investigation in progress?
And yeah, DOJ keeps saying it's just the one device, just this one time. What happens if they suddenly change course just to prevent iOS from getting more secure?
There are, however, those pesky shareholders to keep happy.
Had he lost to the DOJ, here is what would (might) have happened:
- he would have gladly unlocked this phone and billed the DOJ for the time spent redesigning iOS
- going forward, he would label each phone's box in red letters: CONTAINS GOVERNMENT-REQUIRED BACKDOOR (I doubt the government could forbid him from doing that)
- he would then stop selling devices in Apple stores directly and only allow ordering them, with direct home delivery, from an Apple website hosted and operated outside the USA
- all the shipping would be done directly from China, bypassing the US tax system altogether
- shortly after, he would remove the backdoored iOS from devices not sold directly on US soil
That would be a big fat middle finger to the DOJ.