product-security@apple.com
They also respond to security@apple.com but prefer the product-security address.
Further, there are any number of legitimate bug bounty programs out there, like ZDI, that would pay for a bug like this and then immediately disclose it to Apple so it could be fixed.
Disclosing a 0-day root authentication bypass on Twitter isn't cool, even if it's local-only: think of the impact on shared iMacs on university campuses.
This isn't the first extremely serious and dumb High Sierra password bug this year [1] [2], and unless Apple is hurt badly enough to be forced to change, it won't be the last. High Sierra is full of bugs, and seemingly not just annoying bugs but security bugs too.
Let's hope Apple gets sued for the damage this bug will cause, so that they make sure the next release of macOS isn't another bug-filled mess.
[1] https://arstechnica.com/information-technology/2017/09/passw...
[2] https://www.macrumors.com/2017/10/05/macos-high-sierra-disk-...
Encouraging irresponsible disclosure because one wants to see Apple hurt is a reckless and selfish attitude because it puts millions of Apple customers at risk in the process.
I don't want to see Apple hurt (I'm an Apple guy myself, using Macs, an iPhone, an iPad and an Apple Watch); I want to see them improve. I doubt they'll start caring about QA unless they're forced to.
One absurdly serious and stupid password bug like this can be an honest mistake, but three (that we know of, all full disclosures) in a few months is negligence, and it should be criminal if it isn't.
I mean, this bug has been reported already: by every cheesy hacking movie ever, by every beginner's book on social engineering, and so forth. Heck, it was "reported" by Richard Feynman talking about cracking safes during the Manhattan Project.
IMO, this behaviour is part of the problem, part of why tech companies take security seriously only at a superficial level.
Don't kill the messenger.
> it puts millions of Apple customers at risk in the process.
Nah, it's Apple that put millions of customers at risk, not the person who disclosed the vulnerability. Let's not shift blame away from the guilty party here.
Apple, one of the richest companies in the world, is obviously just cutting corners on QA here. This is unacceptable.
It seems some people here are more concerned about negative publicity than about user security. This is a pattern we've seen countless times at big tech corporations (such as Yahoo): not disclosing hacks that put users and their data at risk. That's unacceptable for a company that claims to be all about its users.
It's not a bug; it's a bad design decision. How to initialize the root password on a new machine is a hard problem in a consumer environment. Some people will set it, lose it, and then want support to fix it. One would expect some clever Apple solution, such as initializing the password to random letters and providing the buyer with that info on a scratch-off card. That way, the buyer can be sure no one has seen the password before they use the scratch-off card.
Setting it to null? That means nobody thought about the problem.
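The scratch-off scheme above is easy to sketch. Here's a minimal, purely hypothetical version in Python (the function name and length are my own invention; this is nothing like Apple's actual provisioning process):

```python
import secrets
import string

def initial_root_password(length: int = 16) -> str:
    """Generate a random factory-set root password, the kind that could be
    printed on a scratch-off card in the box (hypothetical scheme)."""
    # Letters and digits only, so the password is easy to type at a login prompt.
    alphabet = string.ascii_letters + string.digits
    # secrets (not random) so the value is cryptographically unpredictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(initial_root_password())
```

The point isn't the ten lines of code; it's that a safe default is cheap, which is what makes shipping a null root password so hard to excuse.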
Apple put millions of their customers at risk by skimping on QA. As an Apple user I'm OK with this getting out if it motivates Apple to improve their approach in the future.
Edit: as usual, downvotes but no response. I miss when this place was decent.
I'm a die-hard Apple user myself, and I agree that the long list of severe bugs in High Sierra is absurd; a big public backlash might be enough to kick them into gear. On the other hand, I, a university student with next to no understanding of computer security, can now simply walk onto campus, sit down at a Mac, and within seconds have complete access to the computer. It's ridiculous, and it's horrendous that it shipped like this, but it's not something that needed to get out, especially something so easy to exploit.
We geeks have been complaining about macOS's horrible QA for years, yet nothing has been done. The fact that this is so simple to pull off will probably (hopefully) get ordinary people talking about it too ("Hey, did you hear you can hack Macs without a password? Very insecure"), which would force Apple to improve.
I don't have any experience with enterprise-grade IT, but it seems like shared computers should be thin clients or at least use UEFI to securely boot an image over the network and not keep anything sensitive locally.
If you give someone physical access to a box, they will be able to own it.
It's educational for the end user: you cannot trust Apple. A good reminder that there are other OSes available out there.
One would think that something as simple as a login would be deterministic.
How would you feel if someone discovered a 0-day at a company that holds credit card and identity info, published the 0-day, and hackers then stole all that info (including yours)? I'm sure 'creating a thunderstorm of negative publicity' would be the last thing you'd want.
You mean, in addition to bad QA and complete disregard for their users' security? And being the richest and most profitable company ever, cutting corners and evading taxes?
Their response on Twitter was amazing: "PM us so we can discuss this privately", not "thank you, we're looking into it NOW".
If so, why? How do you identify companies like Apple that get one set of rules while other companies get another?
Yes, Apple shouldn't be having this issue, but disclosing a 0-day can hurt users far worse than it hurts Apple. Apple may lose a tiny bit of money, but users could lose far, far more, especially if someone develops a good way to remotely deploy this and take advantage of the defect.
Ignoring responsible disclosure also limits the ability to sue them for any resulting damage (or so I'm told by a lawyer friend, who thinks this disclosure makes it almost impossible to sue them successfully over it unless it simply takes them too long to fix).
How could that happen in any case? Isn't a waiver of liability pretty much the first line of every license? Unless you have some special contract with Apple that overrides the standard boxes you ticked, how would anyone sue?
It's about protecting systems RIGHT NOW from immediately causing harm to people's lives.
https://www.eff.org/deeplinks/2017/10/drms-dead-canary-how-w...
Blame the DMCA. This guy is in Turkey: does GP really think he can expect the fair treatment and equal compensation of a "western world" security researcher?
The fact that we know about it means we can take steps to mitigate the damage.
There is blame on both.
If you leave your key in your front-door lock and I blast your address out on Twitter and tell people about it, I think I bear some responsibility.
Other than buy an Apple product, the users did nothing intentional to undermine security.
Since this is a subjective argument, based more on historical norms of "responsible disclosure" than on law, I'm gonna lean, in this case, toward it being Apple that failed.
They built the entire "walled garden" without outside help. They want the control, they have billions of dollars, they can hire whatever talent they like...
And they still failed to spot a password-less root login issue.
People need to know today to be even more cautious about using Apple gear in public places or around plain ol' tech jerks that like to fuck with people for a gag.
Society has no legal or moral obligation to make sure Apple stays in business.
The main question that should be asked is, how did this get overlooked? How is it that your average website has better password security than the OS of one of the richest tech companies in the world?
To be fair to Apple, Microsoft had similar issues back in the 1990s. Perhaps it takes a string of security blunders for some tech companies to take security seriously.
I'd lay responsibility at the lockmaker's door, not the guy who told everybody they were at the mercy of anyone with a toothpick.
That's not a faithful analogy. Apple isn't your neighbour; they're the landlord. The scenario is more like the landlord using bogus locks in your complex, and you posting about it on Twitter. You could complain to them privately too, but given your past experiences, perhaps you thought Twitter would be a more effective medium.
There is no realistic way to keep a lid on something like that and so in this case the blame is entirely on Apple.
A better analogy would be "if the lending bank left the door to your new house open..."
https://www.theregister.co.uk/2016/08/05/apple_joins_the_bug...
The idea of responsible disclosure is to minimize harm for you, the user. Not to minimize bad publicity.
The question is large and complicated, and people can agree to disagree. There's nothing wrong with tweeting vulns: The company is at fault, we can defend ourselves now that we know about the vuln, and it's a big PR disaster for Apple.
A past conversation: https://news.ycombinator.com/item?id=14009937
No, no it's not strictly more ethical. It's not even strictly safer, which should be an even easier question to answer. The baked-in assumption in your logic is that users have no options other than waiting to patch. But, obviously, they do, and keeping vulnerabilities secret deprives them of those options.
> people are under no obligation to
> report vulns privately
Legal obligation, no, you're right. Moral obligation? Why not?
Project Zero (and infosec professionals, at least all of the ones I've ever worked with) would tell you that this was the most irresponsible way to handle the issue, short of not saying anything and selling knowledge of the exploit to someone other than the vendor who could fix it. Publicizing something like this in this way is something people do because they want publicity for themselves. It is not something someone does if their biggest concern is for the users who might be affected by it. It is something someone would do if they didn't care about the users, and just wanted public credit for pointing it out.
This vulnerability is ridiculous, unacceptable, and braindead to execute.
They've been quick (within 45 days) to patch every major bug I've reported to them, and where the bugs were cross-platform (impacting Windows, Android, etc.) they've consistently been among the quickest to issue a patch, so I'm not sure how you can justify that statement.
Although, for what it's worth, the last time I reported a security vuln to Apple through their official process, it took them around two years to fix (admittedly a low-priority vuln: passwords being sent over HTTP).
Wait, what?
The other day I actually ran across some AWS docs that suggest sending your AWS root key ID in the URL of HTTP requests:
http://docs.aws.amazon.com/AlexaWebInfoService/latest/index....
His Twitter account says he's an agile software craftsman, a founder in Turkey and a community guy, and he tweets about devops, open source and other stuff.
An average person disguised as a software developer?
The thing about bug bounty programs is that they are not a negotiation. The company decides how much your information is worth; take it or leave it.
If you thought this bug was worth $25,000 and you feared that Apple might offer a $100 discount coupon plus a lovely "I Love My Mac" coffee mug, is there any way to start a negotiation without being accused of extortion (if you imply that you might disclose it publicly)?
This is a serious question: Is there any way to negotiate for security bugs, before or after disclosing all the details, without running a legal risk?
In general, you have to rely on this being a repeated game - you and the pentester community at large submit lots of bugs to this company, and you rely on them to make it worth your time and talent. If they don't, you go test someone else's software. Reputation is everything.
It gets the word out quickly.
Releasing proprietary software with such a hilariously insecure authentication system isn't cool. This isn't free software, produced by people & corporations out of the goodness of their hearts; rather, it's something for which people pay a good deal of money and which they have a right to expect is at least somewhat secure.
Getting the word out fast that a) there's a huge security hole and b) it's in Apple software benefits those running macOS (so they can fix their systems) and those considering running macOS (so they can evaluate whether an alternative is more appropriate).
Instead of trying to control the behaviour of every single human being in this world and demanding they do things a certain way (which is, was and always will be impossible), it is much more favourable to establish the expectation that a zero-day vulnerability might be dropped any week, and to have businesses (vendors and clients) be prepared for it so it can be handled adequately.
You may know the protocol, and security researchers and people in the tech industry may know it, but why is an ordinary Joe expected to know, or research, that email address and/or the protocol around 0-day vulnerabilities?
It's the same logical line of thought that leads people into turning wallets into the lost and found (or an authority) instead of just pointing at it on the ground shouting "hey look, a wallet!" then walking away.
I'm just curious how much of a payday this guy missed out on by not disclosing responsibly.
In the course of developing my current application, I've discovered a couple security bugs in macOS, which I reported to Apple product security in PGP-encrypted emails. The only thing offered to me was to have my name/company listed in the release notes (which they are, for the latest 10.13 update, along with a CVE#).
I'm sure Apple will in the future.
No one is under any obligation to sweep a company's security problems under the rug for them.
If companies create incentives for people to share vulnerabilities with them first, great, but no one is under any obligation to participate in those programs.
Don't ship broken software if you don't want pie in your face.
Just because the vulnerability is not disclosed does not mean it is not being actively exploited. It probably is.
Users who don't care about their security do not deserve to be "protected" at the expense of those who do care and who benefit from immediate disclosure.
And at the same time provided everyone with a simple, free, and perfect way to fireproof his/her house.
You could wait for the fire department, which may take hours to get there, and hope that no malicious party down the street saw the fire, or you can do this. It turns out that both are quite reasonable reactions in this scenario, and that the latter is much more obvious to the layman.
There is no good reason to get angry at the layman for taking a course you don't prefer.
Once the root user has been enabled locally, the only sharing setting I found that permits remote access with the root/null combo is Remote Management.
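For anyone stuck on an unpatched machine, the stopgap that circulated alongside the disclosure is to give root a real password so the blank-password trick no longer authenticates, and optionally to switch off Remote Management until the fix ships. A sketch (the kickstart path is the standard Apple Remote Desktop agent location; adapt to taste):

```shell
# Stopgap on an unpatched High Sierra box: set a real, random root password
# so authenticating as root with a blank password no longer works.
NEWPW="$(openssl rand -base64 24)"
sudo dscl . -passwd /Users/root "$NEWPW"
echo "New root password (store it somewhere safe): $NEWPW"

# Optionally disable Remote Management until the patch lands, since that's
# the sharing service that exposes the root/null combo remotely.
sudo /System/Library/CoreServices/RemoteManagement/ARDAgent.app/Contents/Resources/kickstart \
    -deactivate -stop
```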
How dare you publicly shame him and risk his future employability like this? It's only responsible of you to contact him quietly and directly so he can correct his mistake and cover it up so nobody needs to know.
It's like there's one set of rules for the negligent global corporation, which happens to work in the corporation's favor and shames the public for talking to each other about their salary (I mean, flaws in their software), and another set for ordinary people giving a heads-up, who are fair game to pile on.
The PGP key is available here: https://support.apple.com/en-us/HT201214
I wonder what percentage of emails to security@... (apple.com or otherwise) are sent encrypted...
And the only way Apple gets motivated to fix either of those two things is massive PR damage.
Human fallibility, yo.
It seems to me (somebody who has no chops in this domain) that this is such a basic bug. Like something a child would have found just messing around.
And it came from a corporation that has around $200B of cash and cash equivalents. Apple has the resources to test and find bugs like this.
That Apple didn't find it is down to leadership and priorities more than some inherent limits of producing reliable code. One spends on what one thinks important.
But who knows, I've got no domain expertise here. Maybe a fifth of a trillion dollars C&CE really isn't enough to fund production of more robust code. But really?
Source: Information Security 101
There is no justification for releasing a 0day publicly.
It's trivial to find. He can't presume he's the only one who found it. Telling any individual who doesn't have malicious intent is a good thing; therefore, telling everyone is a good thing.