Yikes. Especially looking at the diff of the original problematic fix, it seems like they slapped a quick patch on there and called it a day, instead of investigating to find the underlying architectural issue. Doesn't really inspire a lot of confidence that the resolution for unc0ver is any more thought-through. I wonder if they've identified the root cause? That'd be the really interesting piece to me.
Apple takes the approach of throwing humans instead of automation at a problem quite frequently [1]:
> The press release mentions RMSI, an India-based, geospatial data firm that creates vegetation and 3D building datasets. And the office’s large headcount (now near 5,000) [used to create Apple Maps]
The lack of automated testing is something Apple is working on fixing, but they're a ways away from having anything substantial. The terrible iOS 13 release quite significantly bumped up the internal priority of stability and testing. iOS 14 is likely to be far less buggy than iOS 13 because of this culture change.
[1] https://arstechnica.com/information-technology/2019/09/apple...
The statement is true, but it's not a root cause. It's a high-level description of what happened.
> Still, I'm very happy that Apple patched this issue in a timely manner once the exploit became public.
Sh- should we be happy Apple fixed this so quickly? unc0ver allows consumers to get more out of their Apple devices, and Apple's fix isn't really optional (unless you disable auto-updates and tap "Later" on every update notification). Is this exploit even an issue? Apple's probably not going to let an app exploiting this zero-day into its App Store, and sideloading is difficult; it's very unlikely someone malicious is going to trick people into installing malware that uses this exploit. It sounds to me like Apple is purposefully limiting consumer freedom by actively trying to prevent jailbreaking.
Jailbreaking cuts into their profit a small amount because the community is small.
https://www.reddit.com/r/jailbreak
The benefits are very much worth it though. Most have had iOS 13 features since iOS 11/12. They have iOS 14 features now. Then there are other features that may not be released ever but people find them invaluable.
Ex:
- Per-app, per-website firewall (block tracking/ads)
- Disable apps' ability to spy on your clipboard.
- Disable apps from accessing things you don't want them to, while still letting them launch.
- Themes, so many options: remove your status bar or put new things there.
- Custom widgets
- Detailed wifi and phone information
- Download old versions of apps because the company broke something.
- Detailed phone/memory/CPU info
- Terminal access.
Those are just a few off the top of my head.
Not to fanboy, but half of those are things Android did from the get-go, and the rest have since been added or are generally easy to do.
Fixing bugs used to attack people means fixing bugs used for jailbreaks. There isn't some magical mechanism by which an exploit usable for a jailbreak isn't usable by anyone else.
The current model fails to protect people anyway while providing an extremely strong incentive for the community to publish software that undermines the “security” of the device.
I believe all of these begin with browser- or existing-app-based exploits.
None of them seem to rely on tricking the user into installing a new app. That would be too suspicious for the user, and would entail the attacker uploading their exploit code to Apple, and giving Apple a full list of users they exploited...
Are you sure about this?
I'm far removed from the app store development world, but a cursory glance at the description and the original LightSpeed bug seems to indicate this is a problem within the kernel interface, and as such, I assume, callable by any application?
Sorry, I could be missing something, just curious why this couldn't occur in the app store.
> I wanted to find the vulnerability used in unc0ver and report it to Apple quickly in order to demonstrate that obfuscating an exploit does little to prevent the bug from winding up in the hands of bad actors.
Of course, if he were this talented, surely he would routinely diff new kernel versions and have realized the old bug had been reintroduced before having to rediscover it in a jailbreak?
Currently, sharing security issues with Apple is a guarantee that the tooling you are using to get access to your device won't work anymore; there's definitely a bad incentive not to report security flaws at the moment.
Some of us do that for this exact reason. I wish there was a way for me to just pick software to give root to though, this is way less secure.
As a side note, it's disappointing to see so much unfounded criticism here in the comments. Apple was going to find and fix this bug quickly, regardless of the author's efforts. In this case we get a peek into the inner workings of the exploit discovery process that would otherwise remain secret. The author and Apple both clearly noted that unc0ver was the source of the exploit, and the author made no attempts to hide that fact. Calling the author of this blog post "lazy" or an "informant" is out of touch and uncalled for.
> By 7 PM, I had identified the vulnerability and informed Apple
I don't know why this rubbed me the wrong way. Like, it feels "lazy" (for lack of a better word) to disassemble an exploit and run off to tell the vendor. If anything, the exploit writer should get the credit. I don't know.
They did: https://support.apple.com/en-us/HT211214
For locals, why bother? The optimizer will probably discard the writes, and worrying about stack addresses being reused is a waste of mental space and clutters the code.
All counts are rough. Project Zero posts:
Google: 24
Apple: 28
Microsoft: 36
I was curious, so I poked around the project zero bug tracker to try to find ground truth about their bug reporting: https://bugs.chromium.org/p/project-zero/issues/list For all issues, including closed:
product=Android returns 81 results
product=iOS returns 58
vendor=Apple returns 380
vendor=Google returns 145 (bugs in Samsung's Android kernel, etc. are tracked separately)
vendor=Linux returns 54
To be fair, a huge number of things make this an uneven comparison, including the underlying bug rate, different products, and downstream Android vendors being tracked separately. Also, the number of bugs found != the number they choose to write about.
That's a team of ~10 security researchers over many years...
Considering how many are being discovered each day/month/year, chances are that there are at least hundreds undiscovered...
If it only takes one to ruin your life, and a good security researcher can find one in a few weeks, or months at most, the barrier to someone evil is really really low...
s/good/extremely good/
This doesn't change the fact that someone evil will still probably find one.
The found issues will strongly depend on who happens to be on the Google Project Zero team at the moment.
My post was to counter folks thinking P0 is a Google hit job, which seems to come up frequently on HN.
> My goal in trying to identify the bug used by unc0ver was to demonstrate that obfuscation does not block attackers from quickly weaponizing the exploited vulnerability.
And in fact, I will argue that this looks like it worked great: yes, someone--and of course, likely many people working in shadowy areas of organized crime, arms dealers, and government contractors--figured it out in hours, and they could have been malicious and used it to attack others. But the real question is then how many such attackers you enable and what their goals are. If you publish an exploit as open source code along with the tool (which some people have done in the past :/), you allow almost any idiot "end" developer to become an attacker: millions of people at low effort instead of thousands or hopefully even only hundreds (when combined with incentives, not just ability).
If you publish a closed source binary with obfuscation--one which is restricted to a limited usage profile (like if nothing else it isn't in the right UI form to "trick" someone into triggering it, or where what it ostensibly "does" is too blatantly noticeable)--you limit the number of people who both have the time and incentives to work out the vulnerability and then rebuild a stable exploit for it (which is hard) down to a small number of people, almost none of whom (including the attackers) are then incentivized to publish a blog post (or certainly code) until at least months after it gets fixed (as was the case here).
And so, as someone who had been sitting in the core of this community--where everyone is wearing a grey hat, the vendors are the "bad guys", and "responsible disclosure" is being complicit in a dystopia--and dealing with these ethical challenges for a decade, my personal opinion is "please never ever drop a zero day on the world without it being a closed source obfuscated binary" unless you want to drop the barrier to entry so low that you have creepy software engineers quickly using the exploit against their ex-spouse as opposed to "merely" advanced attackers using the vulnerability for corporate or government espionage.
Google doesn't micromanage employees by the half hour like many companies...
Ooof. Talk about running in circles. Either this was someone who is swamped with work and spaced out, or a new programmer who wasn't familiar with the original. Oddly, I feel bad for both of them!
> Thus, this is another case of a reintroduced bug that could have been identified by simple regression tests.
So maybe writing and reading useful commit logs is not such a bad idea after all :)
Reminds me of this quote: "Those who don't know their history are doomed to repeat it."
I often wonder what goes through the minds of those whose work helps companies exert more control over their customers. Maybe some of them are not so "obedient" after all...