I added a cheeky message to my site's .git/ folder that you'd see if you attempted to view it.
About 2 or 3 months later I started getting "security reports" to the catch-all about an exposed git folder that was leaking my website's secrets.
Apparently, because my site didn't return 404, their script assumed I was exposed, and they oh so helpfully reported it to me.
I got like 4 or 5 before I decided to make it return 404 so they would stop, mainly because I didn't want false-positive fatigue creeping into how I treat emails with "security exploit" subject lines.
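For anyone wanting to do the same, one way to make those scanners see a 404 is a server-level deny rule. A minimal sketch, assuming nginx (other servers have equivalents):

```nginx
# Return 404 for anything under .git, so status-code-only scanners move on
location ~ /\.git {
    return 404;
}
```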
I have a feeling CNAs are bringing this kind of low-effort, zero-regard-for-false-positive-fatigue bullshit to CVEs. Might as well just rip the bandaid off now and stop trusting anything besides the Debian security mailing list.
I've seen reports that easily fail the airtight hatchway [0] tests in a variety of ways. Long cookie expiration? Report. Any cookie doesn't have `Secure`, including something like `accepted_cookie_permissions`? Report. Public access to an Amazon S3 bucket used to serve downloads for an app? Report. WordPress installed? You'll get about 5 reports for things like having the "pingback" feature enabled, having an API on the Internet, and more.
The issue is that CVEs and prior-art bug bounty payments seem "authoritative", and once they exist, they're used as reference material for submitting reports like this. It teaches new security researchers that the wrong things are vulnerabilities, raising a generation of researchers who look for entirely the wrong things.
[0]: https://devblogs.microsoft.com/oldnewthing/20060508-22/?p=31...
No, I'm not joking. That's one of the reports I saw in November. I've also had to triage the claim that our site supposedly has a gazillion *.tar.xz files available at the root. All because the 404 handler for random [non-production-relevant] paths is a fixed page served with a 200 response.
As far as I'm concerned, running a bulk vulnerability scanner against a website and not even checking the results has as much to do with security research as ripping wings off of flies has to do with bioengineering.
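The fix on the scanner side is trivial: don't trust status codes alone. A minimal sketch of the idea (`is_exposed_git_config` is a hypothetical helper; a real scanner would fetch something like `.git/config` and inspect the body):

```python
def is_exposed_git_config(status: int, body: bytes) -> bool:
    """Heuristic check for an actually-exposed .git/config.

    A 200 status alone proves nothing when a site serves its 404 page
    with a 200 response; require git-config-shaped content as well.
    """
    return status == 200 and body.lstrip().startswith(b"[core]")
```

A fixed "not found" page served with HTTP 200 then stops counting as a finding, while a real git config still does.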
Then they beg to have the report closed as "informative". We don't comply unless it really is an honest mistake; I don't like the idea of low-quality reporters evading consequences again and again, sending scattershot bug reports in a desperate attempt to catch a team not paying attention.
[1] Arguably bad, depending on your interests, because such behavior can be intended by an adversary.
Automated security scanning by people who don’t know what they are doing has become an enormous hassle in so many ways and really is damaging the ability to find and handle true threats.
[0] https://twitter.com/badlogicgames/status/1267850389942042625
Red Hat (my employer), Canonical, and SUSE are also CNAs. I can only speak to ours, but I think our prodsec team does a great job with the resources they've been given. Nobody is perfect, but if you take the time to explain the problem (invalid CVE, wrong severity, bad product assignment, ...) they consistently take the time to understand the issue and will work with whichever other CNA or reporter is involved to fix it. Generally we have a public tracker for unembargoed CVEs, so if something affects us and isn't legitimate or scoped correctly, you might get somewhere by posting there (or the equivalent on Ubuntu/SUSE's tracker).
Perhaps it is just the nature of the open source community Linux distros are a part of, though, that lets them apply it to CVEs as well.
Doesn't help with personal reports though. :-)
Curious, did you get CVE assignments against your personal site? 0.o
Yes, being the discoverer of a CVE is a major resume item. Pen testers who have a CVE to their name can charge more. Companies can charge more for sending them.
There's no good reason that folder should exist except for a joke, so how is this not a helpful message in the vast majority of cases? All lint rules have exceptions, doesn't make them not useful.
There's plenty of cases where a .git directory is just harmless; I've deployed simple static sites by just cloning the repo, and this probably exposed the .git directory. But who cares? There's nothing in there that's secret, and it's just the same as what you would get from the public GitHub repo, so whatever.
That some linting tools warn on this: sure, that's reasonable.
That random bots start emailing me about this without even the slightest scrutiny because it might expose my super-duper secret proprietary code: that's just spam and rude.
I run a small vulnerability disclosure program and receive a ton of it - people clearly run automated scanners, which I presume create automated vulnerability reports, on things that are not even remotely dangerous AND have been specifically ruled out of scope for the program.
It's not helpful; it's time-consuming, and often people will complain if you don't answer their reports.
There are occasions in which I recognize a CVE as a vulnerability to a legitimate possible threat to an asset. By and large, however, they seem to be marketing material for either organizations offering "protection" or academics seeking publication.
I think like anything else of value, inflation will eat away at the CVE system until something newer and once again effective will come along.
For instance, Linus Torvalds (a very smart person) resisted using something stronger than SHA-1 for Git because he said the purpose of hashes isn't security, it's content-addressable lookup of objects. Which may have been true at the time, but then Git added commit signing. Now if you sign a commit, no matter how strong of an algorithm you use, the commit object references a tree of files via SHA-1. Git is currently undergoing an extremely annoying migration to support new hash algorithms, which could have been avoided.
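The content-addressing in question is easy to see: a Git object ID is the SHA-1 of a small typed header plus the content, which is why every signed commit ultimately pins its tree and files through SHA-1, regardless of the signature algorithm. A sketch of the blob case:

```python
import hashlib

def git_blob_sha1(data: bytes) -> str:
    # Git hashes "blob <size>\0<content>" with SHA-1 to get the object ID,
    # the same ID that trees and (transitively) signed commits reference.
    header = b"blob " + str(len(data)).encode() + b"\0"
    return hashlib.sha1(header + data).hexdigest()
```

This should match what `git hash-object --stdin` prints for the same bytes.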
Also, BLAKE3 is faster than MD5 and also far more secure, so if you're saying "It's okay I'm using MD5 because I want a faster hash and SHA-256 is too slow," there are options other than SHA-256.
If the thing you're trying to hash really really isn't cryptographic at all, you can do a lot better than MD5 in terms of performance by using something like xxHash or MurmurHash.
So, even if it isn't a security vulnerability, using MD5 in a new design today (i.e., where there's no requirement for compatibility with an old system that specified MD5) is a design flaw.
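To make the choice concrete, here's a rough sketch, using `blake2b` from Python's standard library as the modern stand-in (BLAKE3 and xxHash themselves need third-party packages such as `blake3` and `xxhash`):

```python
import hashlib

data = b"some payload"

# Legacy only: MD5's collision resistance is broken; use it solely when
# an old protocol or file format forces you to.
legacy = hashlib.md5(data).hexdigest()

# New designs that need a cryptographic hash: pick a modern algorithm,
# e.g. BLAKE2 (in hashlib), or BLAKE3 / SHA-256 via other libraries.
secure = hashlib.blake2b(data).hexdigest()

# Purely non-cryptographic uses (hash tables, sharding, dedup keys) can
# go faster still with functions like xxHash or MurmurHash.
```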
So all the AB tests, percentage rollouts etc. started getting spam PR comments until they were made to turn it back off again.
Frankly, if a teammate were writing their own crypto algorithm implementation in the bog-standard web app we were working on, that would be more concerning than which RNG they're using.
“Look ma! We’re FIPS compliant now!”
- https://snyk.io/research/zip-slip-vulnerability
But the problem, I think, contains its own solution. The purpose of CVEs is to ensure that we're talking about the same vulnerability when we discuss a vulnerability; to canonicalize well-known vulnerabilities. It's not to create a reliable feed of all vulnerabilities, and certainly not as an awards system for soi-disant vulnerability researchers.
If we stopped asking so much from CVEs, stopped paying attention to resume and product claims of CVEs generated (or detected, or scanned for, or whatever), and stopped trying to build services that monitor CVEs, we might see a lot less bogus data. And, either way, the bogus data would probably matter less.
(Don't get me started on CVSS).
However, many institutions want to outsource responsibility for their own high-stakes decisions to the peer review system, whether it's citing peer-reviewed articles to justify policy or counting publications to make big hiring decisions.
It introduces very strong incentives to game the system -- now getting any paper published in a decent venue is very high-stakes, and peer review just isn't meant for that -- it can't really be made robust enough.
I don't know what the solution is in situations like this, other than what you propose -- get the outside entities to take responsibility for making their own judgments. But that's more expensive and risky for them, so why would they do it?
It feels kind of like a public good problem, but I don't know what kind exactly. The problem isn't that people are overusing a public good, but that just by using it at all they introduce distorting incentives that ruin it.
The problem is the misconception ordinary users have about what CVEs are; the abuses are just a symptom.
It simply says nothing about whether a vuln is real, relevant or significant.
The CNA system actually is better, since it at least puts some filter on it. Before, it was the Wild West: anybody could assign a CVE to any issue in any product, without any feedback from anyone knowledgeable in the code base, and assign any severity they liked, which led to wildly misleading reports. I think the CNA system at least provides some sourcing information and order.
Their other GitHub work is following tutorials, labs and courses.
This repository no longer exists.
For example CVE-2018-11116: someone configures an ACL to allow everything, and then code execution is possible, as expected: https://forum.openwrt.org/t/rpcd-vulnerability-reported-on-v...
and CVE-2019-15513: The bug was fixed in OpenWrt 15.05.1 in 2015: https://lists.openwrt.org/pipermail/openwrt-devel/2019-Novem...
We were not informed about either CVE. For the first, someone asked in the OpenWrt forum about the details of the CVE, and we were not even aware that one existed. The second I saw in a public presentation from a security company mentioning 4 CVEs in OpenWrt, when I was only aware of 3.
When we, or a real security researcher, request a CVE for a real problem as an organization, it often takes weeks until we get it; we have released some security updates without a CVE because we didn't want to wait that long. It would also be nice to be able to update CVEs later to contain a link to our detailed security report.
From your point of view, I'm sure that's probably quite frustrating. From my point of view (as a user), that's completely absurd, should never happen, and is a huge deficiency in the CVE program.
Fortunately, it's possible for the OpenWRT project to become a CNA [0] and gain the ability to assign CVE IDs themselves.
See "Types" under "Key to CNA Roles, Types, and Countries" [1]:
> Vendors and Projects - assigns CVE IDs for vulnerabilities found in their own products and projects.
--
[0]: https://cve.mitre.org/cve/cna.html#become_a_cna
[1]: https://cve.mitre.org/cve/request_id.html#key_cna_roles_and_...
Our bug bounty clearly outlines that chat, Jira, Confluence, and our website are all out of scope. Almost all of our reports are on those properties.
In come new CNAs to scale the effort through trusted teams, which makes sense. The MITRE team can only do so much on their own.
Unfortunately, I don't think anyone will be as strict and passionate about getting CVEs done right as the original MITRE team has been.
Here's hoping they can revoke CNA status from teams that consistently fail to meet a quality bar.
I wonder if maybe, instead of trying to fix CVEs, we could try to think about creating alternatives? I know some companies already use their own identifiers (e.g. Samsung with SVE), so perhaps a big group of respected companies can come together to create a new unified identifier? Just an idea though.
Ultimately, because there are now a few hundred [0] CNAs [1] which are "authorized to assign CVE IDs" and, AFAICT, there is nothing in the "CNA rules" [2] that requires them to (attempt to) verify the (alleged) vulnerabilities -- although, in at least some instances, I assume it simply wouldn't be possible for them to do so.
--
> 7.1 What Is a Vulnerability?
> The CVE Program does not adhere to a strict definition of a vulnerability. For the most part, CNAs are left to their own discretion to determine whether something is a vulnerability. [3]
Officially, a "vulnerability" is:
> A flaw in a software, firmware, hardware, or service component resulting from a weakness that can be exploited, causing a negative impact to the confidentiality, integrity, or availability of an impacted component or components.
Fortunately, there is a "Process to Correct Assignment Issues or Update CVE Entries" [5]. In instances of multiple, "duplicate" or "invalid" CVEs, I can see how this might be both frustrating and time-consuming for software developers, though.
--
[0]: https://cve.mitre.org/cve/request_id.html
[1]: https://cve.mitre.org/cve/cna.html
[2]: https://cve.mitre.org/cve/cna/rules.html
[3]: https://cve.mitre.org/cve/cna/rules.html#section_7-1_what_is...
[4]: https://cve.mitre.org/about/terminology.html#vulnerability
[5]: https://cve.mitre.org/cve/cna/rules.html#appendix_c_process_...
Every large security organization requires scanning tooling like Coalfire, Checkmarx, Fortify and Nessus, but I've rarely seen them used in an actionable way. Good security teams come up with their own (effective) ways of tracking new security incidents or vastly filtering the output of these tools.
The current state of CVEs and CVE scanning is that you'll have to wrangle with bullshit security reports if you run any nontrivial software. This is especially the case if you have significant third party JavaScript libraries or images. And unfortunately you can't just literally ignore it, because infrequently one of those red rows in the dashboard will actually represent something like Heartbleed.
Especially if you have customers who outsourced their infosec to the lowest bidder who insist every BS CVE is critical and must be fixed.
We also use tools like Dependabot to keep an eye out for vulnerabilities in our dependencies, and update them to patched versions. This is genuinely useful and a worthwhile timesaver on more complex projects.
It's easy to be cynical about automated scanning (and pen-testing, for that matter), and although it's often needed only as a checkbox for certification, it can certainly add value to your development process.
It's a bit naughty how "security researchers" don't appear to make a good effort to communicate upstream.
And the fact that Jerry has problems reaching out to NVD or Mitre is worrying.
And this issue in my docker-adminer: https://github.com/TimWolla/docker-adminer/issues/89