Looked into it and am equally surprised to find that others, like Microsoft [0], also have such low bounties for these types of attacks.
While providing such an exploit to the affected company has value beyond the bounty (potential job offers, media exposure, credibility, ethical considerations, etc.), weighing that up against life-changing money really makes it hard to fault those who take the more lucrative route of selling these to the highest bidder, whoever that may be.
Seriously, Alphabet and Co. can afford more, especially considering that any such exploit would almost certainly hit their bottom line/stock far harder than a few hundred thousand dollars.
https://github.com/mdowd79/presentations/blob/main/bluehat20...
Unfortunately the talk wasn’t recorded, but he did do a follow-up interview on a podcast called Security, Cryptography, Whatever.
I think this Google/Alphabet VRP change is pretty much just about website vulnerabilities.
Disclosure: I work at Google but not on the VRP.
But the second reason, quite prosaically, is that individual bugs aren't worth that much to a business. You can't build your security program on the expectation that you could reliably squash all bugs. You also invest in being able to detect and contain breaches - and if you do that, even the best exploit is a crapshoot for the attackers. Maybe they get in, lose access five minutes later, and are out a million bucks.
In other words, the point of paying for bugs is to raise the bar, and to get some independent validation of your security practices - not to make attacks impossible.
Finally, there's a retention element to it. Paradoxically, you might be worse off if your bounty program instantly turns your best bug hunters into millionaires. If they no longer need to make rent, they might decide that they like farming more.
Some take a bureaucratic approach, but they are labeled as such on the bug bounty marketplaces.
Web 2.0 organizations aren’t just competing with the gray market; they’re competing with Web 3.0’s licit market, while 3.0 is competing with immediate weaponization, which is far easier to monetize.
I wonder if it's because Google was hit with more issues, since they started doing cloud apps a bit before Microsoft, Amazon, etc.
The example that comes to mind is Gmail, with its rapid growth and the issues it learned to sort out while it was becoming Workspace.
No cloud is perfect; however, I have heard that different clouds have different maturity levels in certain areas of their security.
Something to always think about when using the cloud, which is someone else's computer.
If something goes wrong at the cloud provider, you'd have to deal with securing it somehow going forward anyway, so why not do it when selecting a cloud and trying to be hybrid-cloud or cloud-agnostic?
And from there it follows that maybe the market rate isn't really that high: Zerodium pays maybe 2x what Google does for similar vulnerabilities, which is more, but not a ton more.
Receiving 75k from Google versus a few hundred thousand from a less reputable source is a different scenario from getting a few hundred thousand from Google versus slightly more from those same sources. In the former, I'd have a hard time not going for the large yet morally dubious payday. With the latter, I feel like most, myself included, would stick with Google.
You can't just do a bank transfer, so you're probably getting paid on crypto. Converting the crypto to fiat will probably be a pain. All the reputable exchanges have KYC requirements. You'd have to explain how you came to acquire so much crypto.
I guess you could get paid in a suitcase of cash, but that has its own headaches.
Personally, I'm just picturing so many headaches that even if I wasn't morally against selling it to the highest bidder, it doesn't feel worth it. Selling to some other "proper" corporate entity or a government agency seems reasonable, but are they offering more than Google?
Just with a quick check I found Zerodium, which claims to offer bounties up to $2.5 million. They say their clients are "government institutions (mainly from Europe and North America) in need of advanced zero-day exploits and cybersecurity capabilities."
This assumption seems misplaced. Can you give an example of a security exploit seriously impacting the finances of a publicly traded company?
This is also on the front page https://news.ycombinator.com/item?id=40944505 and I really doubt AT&T stock will suffer significantly. Maybe they'll miss Q3 targets, but they'll be fine. All the execs will get their bonuses.
Anyways, here are two examples off the top of my head:
Of course, the big one, Equifax, which had a significant drop in the week after the announcement. It took roughly two years for the stock to trade at pre-breach levels [0], likely in part due to their less-than-stellar handling of the aftermath, though I'd still consider that directly linked to the breach.
More to the point, there was Yahoo, which I wanted to mention because its impact was more clearly measurable. What was weird about that one is that the case centered around a belated (by two years) announcement of a breach they suffered between 2013 and 2014. That did impact their stock, but more importantly, it's the reason for a 350 million USD reduction in the acquisition price Verizon had to pay for Yahoo [1]. Verizon agreed to cover half the cost of non-SEC government investigations and third-party lawsuits (which I feel would also fall under hitting their "bottom line"), while Yahoo covered the other half and any liability from shareholder lawsuits or SEC investigations. That 350 million USD plus fines is, to me, the clearest number one can put on a breach, and I feel it shows that, whatever one thinks is fair compensation for reporting 0-days, 75k is far removed from that.
So yeah, there have been cases where a security exploit seriously impacted the finances of a publicly traded company. And keep in mind, I only stuck with actual reductions in stock value/acquisition price.
[0] https://www.marketwatch.com/investing/stock/efx
[1] https://www.geekwire.com/2017/verizon-pays-350m-less-yahoo-f...
I'm wondering if bounty programs effectively form a low-paid gig economy for programmers.
The number of clueless individuals running these bug bounty programs makes it not worth it. The only reason most people do it is for the "fame" within the security community, or they're that occasional researcher who was just bored.
Even worse, some companies (like South Korean companies) will not even pay out if you are not a citizen of the country. Makes no sense to me.
But, and this is the important part, in this case there is zero moral quandary, whereas when selling an 0day there is a significant moral question depending on who you’re selling to.
Some people do make it their full-time gig, but the issue is that it's fairly unpredictable; much like "gig work," you're not guaranteed to find a vuln, and the timing between findings is going to be inconsistent at best.
Plus, less risk of waking up and finding out you've been sanctioned by OFAC or something like that.
I'm trying to figure out the labor-side economics of this.
Generally, the supply side is getting a massive discount on these vulnerabilities compared to their potential costs. Although perhaps the discount is appropriate, considering how few vulnerabilities do result in observable expense.
The real value of bug bounties is for less sensitive products that aren't really big targets for nation states. Startups with products that haven't seen wide deployment in sensitive industries, for example.
There are many people who are perfectly happy getting "rep" and lower payouts for finding flaws in even the highly targeted applications, thankfully.
Most certainly, or those who can't get jobs because of their record but know how to code.
Bug finding requires theory building and guesswork; you're working blind. Reporting requires detailed technical writing and PoC implementation. It's time-consuming, so unless you're able to crank out findings or submit the same issue to multiple companies in parallel, the hourly rate will be low. Companies are flooded with low-quality reports, so you really need to make the issue crystal clear.
Private bug bounties are better because there are usually obvious issues, but you're racing to be the first to report.
Contract security work is much more predictable. Companies who "haven't thought about security before" are desperate for help. You can get more money building a system inventory, recommending updates for EOL systems, finding leaked passwords, and turning on firewalls. Basically engineering teams that know they have issues, but need someone external to make it clear to management that they need to invest in security. I've never failed to find at least one way to get system root or cloud admin rights on those contracts.
https://shubs.io/high-frequency-security-bug-hunting-120-day...
Fortunately this is not a problem for me, because I couldn't find anything even if I wanted.
Instead of spending the time and money to build secure systems up front, they will offload this to "bounty programs" where the time spent finding vulnerabilities will not match the reward. It's like an unpaid internship, but worse since you are competing with people of varying cost of living requirements.
Yeah, a potential $150K bounty is a shit ton of money for a person in a third-world country. But for anybody else (given the same time spent finding the vulnerability), there is no financial motivation. Only "fame" via disclosure reports in the security community.
This is the equivalent of a customer asking a professional photographer who is new on the scene to do their photography for free in exchange for "exposure". No, you aren't innovative. You are a cheap asshole.
As it is now, only the largest tech companies with the strongest security records are actually running good bug bounty programs. They have excellent, well-paid security teams and they put systems in place to incentivize all of their employees to write secure code. But, they know that (1) mistakes can still happen, (2) clever vulnerabilities can be discovered that get around code that was previously thought to be following all best practices, and finally they understand very well that (3) if they don't pay, others will.
Unfortunately it's the companies that need it most - like AT&T and Experian - that have the worst track record with rewarding third-party security researchers.
Defense is very hard. Offense, by comparison, is much easier. An attacker has to win once, and then they’re in.
A defender has to win every time, which is much, much harder, if not impossible.
Should be $10m honestly.
Not actually; I am not a lawbreaker ;)
Bugs are found all the time. Sharing a bug you found is not a crime, but I imagine they can always get you on tax fraud.