Second, I'll stick my neck out here a bit and say I find myself agreeing with the author's stance. Namely,
1) Independently discovered vulnerabilities are not "owned" by the first person to discover them. As a courtesy, you may defer to another researcher, or combine your efforts, but I don't think there's any requirement to do so.
2) 90 days' notice is more than enough time to expect at least a cursory response when you say, "Has this bug been fixed? Shall I go ahead and disclose it?", and then again, "This is a heads up that I will be blogging about this on March 12, 2015, i.e. 90 days after the initial disclosure, unless I hear otherwise. Thanks!", and then AGAIN, "This is a reminder that this bug will be disclosed in 4 days :-)".
In Google's case, for example, it's not just 90 days' notice, it's a 90-day deadline to fix. In this case, a simple "no, we need more time, please don't disclose this" response on the 2nd issue could have avoided the whole problem.
Bug bounties, particularly a fully managed program through HackerOne, encourage engineers to spend valuable time and resources investigating and writing up detailed reports of complex issues. If you sign up to run a bounty program, it's essential that you give participants the time of day, like responding to their repeated inquiries about disclosing an issue.
It wasn't clear to me if the author was banned from HackerOne or just Slack's program. If the latter, well, that's fine; Slack absolutely has the prerogative to invite whomever they like to participate in the program. I think they are missing out on a great contributor in this case, though. If the author suffered an outright ban on the platform, that would be distressing.
Lastly, if the 3rd vulnerability was unknown to Slack before the author reported it, I think the author should be properly compensated based on the terms of the program.
"Vulnerabilities reported to the CERT/CC will be disclosed to the public 45 days after the initial report, regardless of the existence or availability of patches or workarounds from affected vendors." https://www.cert.org/vulnerability-analysis/vul-disclosure.c...
It's great that they said they needed more time, but they completely dropped the ball on that communication chain, and got more than enough chances to ask him to wait.
- Be ready to answer every single report on a short timeframe
- Be fair and provide feedback to the reporter
- Be nice, be thankful and reward the researcher if they deserve it
- Be patient with duplicate reports and people just trying to get an unfair HoF entry
Otherwise it may backfire on you, and eventually it will.
These two issues (the hack and how they treat bug reporters) challenge the way I see Slack as a company...
It's important to note up front that Bugcrowd programs differentiate between technical validity and rewardability, in order to maintain fairness to researchers and organizations alike. This means our customers only pay for issues with impact to them, and researchers get solid technical feedback about their submissions.
We’ve found it is not uncommon to have a submission that is technically an issue, but lacks the impact necessary to reward it. In other words, it’s an ‘acceptable risk’ to the organization. Conversely, we’ve handled situations where features were ‘as designed’, but turned out to be major security flaws that were fixed. This process has helped our customers determine what to do in those cases. If you’re thinking about starting a program, we’d encourage you to differentiate between these concepts as well.
The Golden Rule(s): If you touch code or configuration because of the submission, reward the researcher. Here’s the detailed process:
1. Does the submission adhere to the terms and conditions? If not, mark as invalid and explain.
2. Was the submission communicated as an exception in the program brief? If so, mark as invalid and explain.
3. Has the submission, or its root cause, been reported previously? Duplicates can and should be traced back to the code or configuration change that would resolve the issue. If that change has already been made, is in the queue to be made, or has been accepted as a risk, mark the issue as a duplicate and explain.
4. Is the submission technically reproducible? If not, mark as invalid and ask for clarification if you think it has technical merit. If so, it should be communicated as valid and you can move to determining the reward.
5. Will it cause you to make a code or configuration change now, or in the future? If so, it’s rewardable within the terms specified in your bounty brief. Issues with more impact should be rewarded at a higher level. Additionally, if it’s noted in the focus areas for the bounty, it’s worth more. If it does NOT cause you to make a code or configuration change, then provide reasoning to the submitter, and to the extent possible, push it to the brief as an exclusion for future testers.
It’s very important to consider the work of the researcher in step 5... If you think of other areas in the target that are impacted, or your security posture has been improved by discussion from the submission, we encourage you to reward it. If you’re on the fence, push the submission back to the researcher and ask for more information or how they view the impact. Remember, the researcher put effort into finding it for you, and it’s in your benefit to work closely with them and encourage them to go further.
The Golden Rule, transparency about your choices, and proactive communication should guide your judgement at all times.
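The triage steps above are essentially a decision procedure, so here is a minimal sketch of them in Python. The field names and result strings are purely illustrative (this is not an actual Bugcrowd API), but the ordering mirrors steps 1 through 5:

```python
# Illustrative sketch of the Bugcrowd-style triage flow described above.
# The dict keys and outcome strings are invented for this example.
def triage(submission):
    # 1. Outside the program's terms and conditions?
    if not submission["within_terms"]:
        return "invalid: outside terms and conditions (explain why)"
    # 2. Listed as an exception in the program brief?
    if submission["listed_exception"]:
        return "invalid: excluded in the program brief (explain why)"
    # 3. Root cause already reported, fixed, queued, or risk-accepted?
    if submission["root_cause_already_tracked"]:
        return "duplicate: trace to the existing fix or accepted risk"
    # 4. Not technically reproducible?
    if not submission["reproducible"]:
        return "invalid: not reproducible (ask for clarification)"
    # 5. Will it drive a code or configuration change, now or later?
    if submission["causes_code_or_config_change"]:
        return "valid: rewardable (scale the reward to impact)"
    return "valid: not rewardable (explain, add to brief exclusions)"
```

Note that the Golden Rule falls out of step 5: any submission that touches code or configuration ends in the rewardable branch.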
Sorry other organization – I hope you guys move off some time soon.
User accounts and passwords are easy to reset and fix.
Credit cards are easy to reset and fix.
Years of private company discussion showing up in the wild? You're unlikely to ever recover.
I'd be angry but my company wouldn't be ruined. The worst thing I ever say in internal communications is to poke fun at a couple of our grumpier or more entitled users. I also probably curse slightly more than is entirely prudent. But, that'd be mildly amusing to have exposed, not ruinous.
Perhaps if you're saying stuff that you would be "unlikely to ever recover" from if it were shared with people outside of your company, that's not the kind of thing you should be saying.
I'd be worried about company strategy, vision, intellectual property, keys/passwords, system infrastructure, or other details leaking which could hurt our competitive edge, lessen our valuation, or expose our users' PII.
Maybe this is more telling of HackerOne as a platform.
http://valleywag.gawker.com/slack-is-letting-anyone-peek-at-... https://news.ycombinator.com/item?id=8425799
Tbh, at this point I wouldn't be surprised if these "problems" occurred after someone discovered the bug and reported it.
http://slackhq.com/post/114696167740/march-2015-security-inc...
(And I can't help but wonder if these two issues aren't connected somehow. Someone may have been sitting on a 0day trying to do the right thing in expectation of a bug bounty program reward, only to discover exploiting or selling it was the only avenue likely to actually pay out... Surely this isn't just coincidental timing?)
We'd love to have a way to encourage security researchers to focus on our software and give us reports, but we're Open Source and our budget is minuscule. What is considered "insulting" as a minimum reward? What will actually get professional people looking at it with a critical eye? Is its popularity (~1 million users and a pretty well known Open Source project) enough to compensate for not paying very well for disclosures?
Hi, this is Ryan. I work at Slack.
Bug bounties are great, but managing them can be a challenge. Like many companies that run a popular bounty program, we receive quite a few vague reports, invalid reports, and reports generated by automated scanners. We work through these daily to ensure we are focused on the bugs that can have an adverse impact on our users.
We have positive interactions with the people who report bugs, and we appreciate the hard work involved in uncovering issues. If you find a bug, report it via HackerOne and we will reward your work. We have rewarded researchers for over 300 bugs found so far!
Anshuman sent us the first report in December. At a glance his report appeared to be well written and detailed. When triaging bugs, those two things are especially helpful. (We appreciate well-written PoCs!) We reproduce every report received, so below I will convert his report into a description of the problem and a series of steps needed to reproduce it (original report quoted).
------------
From the report:
“Slack users are allowed to share files (posts, snippets) with other users and within channels.”
True
“When a file is shared in a channel and unshared again, it is clearly mentioned on the website that: Un-sharing the file will not remove existing share and comment messages, but it will keep any future comments from appearing in the channel.”
True. This is what the un-sharing feature does. As stated above, files.unshare is in no way an access control feature.
“This makes it obvious that on sharing and then unsharing a file within a channel, it will still remain shared and can be viewed by others on that channel. This is the way it is supposed to be.”
True. Again, this API call is used to stop new comments about a file from appearing in a channel, not to remove the file from a channel. (Deleting is done by the files.delete method) So far no bug, just things working as expected.
“Now, when a file is shared with a Slack user, currently, there is no way to unshare it again from the UI.”
True.
“But, this can be easily done by sending a request to the https://<domain>.slack.com/api/files.unshare end point instead of the https://<domain>.slack.com/api/files.share end point.”
The reporter is proposing that the victim call files.unshare to utilize a “hidden feature”. The reason a user might do this is left to the imagination.
“It is as simple as that.”
There is no instance of files.unshare being called this way in the UI, because that is not what it does. Calling an API method that is not documented is never guaranteed to do what you assume it does.
------------
What Anshuman has created is a scenario where the “victim” must:
1) Use Slack via the Web, Mobile, or Desktop application.
2) Share a file with another user.
3) Observe API calls (or read the JavaScript).
4) Make an assumption about what api/files.unshare is used for.
5) Call that API method directly (curl, js, whatever...).
6) Expect that the method does what they have guessed. (It doesn't, because the reporter's guess was incorrect.)
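For concreteness, step 5 amounts to hand-crafting a request to the files.unshare endpoint. Here is a minimal sketch of what that request would look like; the workspace name, token, file ID, and channel ID are all placeholders, and the snippet only builds the request rather than sending it:

```python
# Sketch of the hand-crafted files.unshare call from step 5.
# "example", the token, and the IDs below are placeholders, not real values.
import urllib.parse

def build_unshare_request(domain, token, file_id, channel):
    """Build the URL and form body a client would POST to files.unshare."""
    url = f"https://{domain}.slack.com/api/files.unshare"
    body = urllib.parse.urlencode(
        {"token": token, "file": file_id, "channel": channel}
    )
    return url, body

url, body = build_unshare_request(
    "example", "xoxp-PLACEHOLDER", "F12345678", "C12345678"
)
print(url)   # the undocumented endpoint the reporter called directly
```

Nothing in the Slack UI issues this request; the whole scenario depends on the "victim" constructing it by hand and assuming what it does.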
------------
Testing this report involved working with multiple developers to review the nature of files.unshare and what the impact of this bug would be. At the end of our investigation we replied to the reporter saying that we appreciate his effort, but this is not a vulnerability, because files.unshare is never used in this way. Unfortunately, we then received this message from Anshuman:
“I am giving you a heads up that I will be blogging about this sometime today. Thanks for your time.”
So after hours spent reproducing this and then explaining to Anshuman why it isn't a vulnerability, his reaction was to create a blog post titled “Hidden Feature in Slack leads to Unauthorized Information Leakage of Files”.
I believe that HackerOne is a valuable platform, and outside of this instance our experience has been extremely positive. We will continue to use it and look forward to working with new people.
Btw, I’m not off the hook, because I did something wrong too. I failed to keep Anshuman updated on a second report he filed in December. I absolutely agree that bug bounty participants should receive timely replies to their queries. This oversight is regrettable and this mistake will not be made again. My apologies to Anshuman for not keeping him updated on the status of the bug, which would have allowed proper coordination and disclosure.
Good Hunting,
Ryan
If you accept that Anshuman should have been updated on the 2nd bug, and that he blogged about it only after multiple unanswered requests (a completely reasonable reaction, I'd say), why did you then say he had gone against the spirit of the site and have him removed from your bug bounty programme? (I'm guessing you're the same Rhuber as the one commenting on the 3rd bug.)
1) https://hackerone.com/disclosure-guidelines states:
"If 180 days have elapsed with the Response Team being unable or unwilling to provide a disclosure timeline, the contents of the Bug Report may be publicly disclosed by the Researcher. We believe transparency is in the public's best interest in these extreme cases."
2) He set an arbitrary 90-day disclosure checkpoint.
3) We explicitly asked for more time in dealing with the bug.
4) We had an extremely negative experience with him during his first report. He was unnecessarily adversarial when we patiently explained that he had not found a vulnerability.
---------
Within the HackerOne interface, a "Duplicate" is actually listed as a Closed:Duplicate issue, and doesn't appear in the Open issues tab at all. Perhaps a method of attaching duplicates to the original and allowing communication between all involved is useful? ¯\_(ツ)_/¯
(Note: I'm not related to letschat or sdelements in any way)
In general, Slack is full of security holes, if you look at the number of bounties awarded. On the other hand, it's great they do pay people for finding security bugs. I just hope they can tighten up the code and run more security checks before deploying to production...
But there's simply no excuse for the mediocre treatment of what by all accounts appears to be an authentic bug and genuine submission (and with quite a bit of care and energy behind it).