Okay I'll leave something behind as well. This is my favorite sec-conference video of all time: [HOPE X] Elevator Hacking: From the Pit to the Penthouse https://www.youtube.com/watch?v=rOzrJjdZDRQ Closely followed by DEF CON 18 - Joseph McCray - You Spent All That Money and You Still Got Owned... https://www.youtube.com/watch?v=_SsUeWYoO1Y
Deviant Ollam also has quite a few other interesting talks.
Does anyone want to comment on how feasible it would be to defend against stuff like that in both hardware and software?
E.g. in software, instead of just storing an address in memory, store a tuple. Something like (address, ~address). Validate each tuple on use, i.e. (address ^ ~address) must result in all bits set. That's obviously a naive thing, but there are probably similar relatively low overhead things that can be done.
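The complement-pair idea above can be sketched like this (Python for illustration only; a real implementation would live in C at the pointer/allocator layer, and all names here are made up):

```python
# Sketch of the complement-pair encoding described above: store
# (addr, ~addr) and verify on every use that XORing the two halves
# yields all bits set. Any single flipped bit breaks the invariant.

MASK = (1 << 64) - 1  # assume 64-bit addresses

def protect(addr: int) -> tuple[int, int]:
    """Store the address alongside its bitwise complement."""
    return (addr, ~addr & MASK)

def load(pair: tuple[int, int]) -> int:
    """Validate the pair on use; raise if a glitch corrupted either half."""
    addr, inv = pair
    if addr ^ inv != MASK:
        raise RuntimeError("glitch/corruption detected")
    return addr

p = protect(0xDEAD_BEEF)
assert load(p) == 0xDEAD_BEEF

# A flipped bit in either word is caught on the next use:
try:
    load((p[0] ^ 0x40, p[1]))
except RuntimeError:
    pass  # detected
```

Note the limitation: an attacker who can flip the *same* bit in both words (or glitch the comparison itself, as mentioned below) defeats this, which is why it's only a low-overhead hurdle, not a real defense.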
Same with hardware. It wouldn't be too difficult to store a parity bit to accompany each register byte. Any hardware glitching that flipped register bits would tend to result in parity errors. Parity checks are not very secure when considered individually, but collectively it would be very difficult to glitch the hardware without introducing massive numbers of parity errors.
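A per-byte parity check like the one described might look like this in software terms (an illustrative toy, not any vendor's actual design):

```python
# Even-parity sketch: one parity bit stored per register byte,
# recomputed and compared on every read.

def parity(byte: int) -> int:
    """Parity bit: 1 if the byte has an odd number of set bits."""
    return bin(byte & 0xFF).count("1") & 1

def store(byte: int) -> tuple[int, int]:
    return (byte & 0xFF, parity(byte))

def read(cell: tuple[int, int]) -> int:
    byte, p = cell
    if parity(byte) != p:
        raise RuntimeError("parity error: possible glitch")
    return byte

cell = store(0b1011_0010)
assert read(cell) == 0b1011_0010

# Any single flipped bit changes the parity and is detected:
try:
    read((cell[0] ^ 0b0000_0100, cell[1]))
except RuntimeError:
    pass  # detected
```

This also shows why parity is weak individually: flipping an *even* number of bits in the same byte leaves the parity unchanged, so the strength comes from forcing the attacker to get that right across many bytes at once.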
Remember, you're not causing the memory to go funky, you're messing with the processor's reads from registers and cache. So if the next instruction says "jump if not equal to 0" and the attacker wants "jump if not equal to 1", parity can't really help you in that scenario.
If you add parity, then the attack just needs to glitch that parity check out.
Hardware defenses are the best. In particular, it looks like Apple is using a PLL (phase-locked loop). It has been a while since I worked with all this stuff, but I believe the PLL makes clock glitching impractical.
For voltage glitching, I'm sure they have components that monitor voltage and either smooth it out, or just shut the chip down if it sees something weird.
There are also known, reasonably good defenses against glitches to the program counter (PC) that involve checking, at the end of every basic block, that a subset of that block's instructions actually executed.
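One common shape of that defense is a running "signature" that each instruction in the block folds into, compared at the block's end against a value computed at compile time (published schemes like CFCSS work this way in the compiler; the opcodes and blocks below are a toy model):

```python
# Signature-based execution checking sketch: each executed instruction
# updates a rotating checksum; at the end of the basic block the checksum
# must match the compile-time value, so a glitch that skips or reorders
# instructions is detected.

def fold(sig: int, op: int) -> int:
    """One signature step: 32-bit rotate-left by 1, then XOR the opcode."""
    sig = ((sig << 1) | (sig >> 31)) & 0xFFFFFFFF
    return sig ^ op

def precompute_sig(opcodes: list[int]) -> int:
    sig = 0
    for op in opcodes:
        sig = fold(sig, op)
    return sig

def run_block(opcodes: list[int], expected_sig: int) -> None:
    sig = 0
    for op in opcodes:
        # ... actually execute op here ...
        sig = fold(sig, op)
    if sig != expected_sig:
        raise RuntimeError("glitch: block did not fully execute")

block = [0x91, 0xB4, 0x17, 0xD6]
run_block(block, precompute_sig(block))  # all instructions ran: passes
```

An instruction-skip glitch (e.g. executing only `block[:-1]`) leaves the signature short one fold and fails the end-of-block check.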
Of course, since the Year of Snowden, I now assume that any "theoretical" attack vector has a Team, a Project Manager, and a half-completed Kanban board somewhere deep in the NSA…
- They haven't found any actual attacks.
- Because the secure enclave runs so little code, there's very little attack surface in software, and much of what there is (mainly message passing between the secure and normal environments) seems solid. The only likely possibility is the wrapper code around IMG4.
- However, if there is an exploit in IMG4, there aren't many mitigation techniques (stack canaries, ASLR) built in, so it would be likely to succeed (again, conditional on there actually being an exploit).
- Attacking hardware might be possible, but mainly on older devices, because the >= A8 chips have extensive protection against side-channel and power analysis attacks.
- One "game over" possibility would be blocking the "fuse signal" that tells the CPU that the secure enclave has been compromised. This would allow for replay attacks. However, this would require extremely capable hardware for both analyzing the chip lines and actually performing the attack. If it's possible at all, it would definitely be restricted to NSA-like scenarios.
They do conclude that the iPhone security features seem "light-years ahead of competitors" (their words), and coming from a Blackhat presentation, that actually means something.
Hard to say. The sorts of people who are capable of finding these kinds of exploits don't advance their careers by publishing.
I wonder what FIPS 140-2 level this chip would fit into?
Of course that might raise development costs but that seems like a fair trade off in this case, especially if it causes some "features" not to be implemented because they would be too hard.
A lot of the complexity they're documenting is in hardware: the AES-XEX memory encryption scheme that protects the SEP's memory from the AP (or any other component of the system), the fuse array that controls its settings, and the memory filter that restricts AP reads/writes to the mailbox range.
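For reference, XEX ("xor-encrypt-xor") encrypts block j under a per-block mask: delta_j = E_k2(tweak) * alpha^j in GF(2^128), then C_j = E_k1(P_j ^ delta_j) ^ delta_j. Here's a toy sketch of that structure only; the block cipher below is a stand-in permutation, not AES, and the keys/constants are arbitrary, so nothing here reflects Apple's actual implementation:

```python
# Toy XEX sketch. E/D are a placeholder invertible 128-bit permutation
# (NOT secure, NOT AES); only the tweak-mask arithmetic is the real
# XEX shape.

MASK = (1 << 128) - 1
POLY = 0x87  # GF(2^128) reduction: x^128 + x^7 + x^2 + x + 1
ODD = 0x9E3779B97F4A7C15F39CC0605CEDC835  # arbitrary odd mixing constant
INV = pow(ODD, -1, 1 << 128)              # its inverse mod 2^128

def E(key: int, block: int) -> int:
    """Placeholder keyed permutation standing in for AES."""
    return (((block ^ key) * ODD) + key) & MASK

def D(key: int, block: int) -> int:
    """Inverse of E."""
    return ((((block - key) & MASK) * INV) & MASK) ^ key

def xtimes(x: int) -> int:
    """Multiply by alpha (the polynomial x) in GF(2^128)."""
    x <<= 1
    return (x ^ POLY) & MASK if x >> 128 else x

def delta(k2: int, tweak: int, j: int) -> int:
    """Per-block tweak mask: E_k2(tweak) times alpha^j."""
    d = E(k2, tweak)
    for _ in range(j):
        d = xtimes(d)
    return d

def xex_encrypt(k1: int, k2: int, tweak: int, j: int, p: int) -> int:
    d = delta(k2, tweak, j)
    return E(k1, p ^ d) ^ d

def xex_decrypt(k1: int, k2: int, tweak: int, j: int, c: int) -> int:
    d = delta(k2, tweak, j)
    return D(k1, c ^ d) ^ d
```

The point of the tweak mask is that the same plaintext encrypts differently at every memory location, which is what stops the AP from learning anything by comparing or replaying SEP memory blocks.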
Still more of the complexity is in the AP and in the AP's interfaces and drivers. That's real complexity, but it's outside the SEP's TCB.
Everything is a trade off I guess...
However, most vulnerabilities in these applications are simply too valuable to announce to the world. They are generally purchased by offensive cyber operations companies or the government before they could be explained at e.g. Blackhat.
This is one reason it's crucial to have bug bounties. Apple can easily pay more than, say, Endgame. If they bought exploits in the same fashion and at the same price point as the defense industry, their security posture would improve tremendously.
As the secure enclave's die area is much smaller than the application processor's, you'd also have to focus your protons/alphas/heavy ions very precisely, because otherwise you'd affect the logic outside the secure element much more often.
I think this approach is, in practice, much, much harder than just shining some random radioactive source on it.
I always welcome feedback! :^)