You can’t assess risk without knowing the environment, compensating controls, and business logic.
Veracode is a check-the-box enterprise compliance tool that reduces engineering productivity and corporate profitability.
Yes, vulnerability management is an important problem, but Veracode is not the solution, not even close.
My recommendation to my manager and others was that, for the price being asked for most of these tools, we could instead hire two or more devs to do nothing but code review and manually search for vulnerabilities, and they would do a better job. Instead they went with Veracode, and the first thing I got after we implemented it was three emails about false positives that didn't properly identify the code file and line number where the supposed vulnerabilities occurred, meaning I had to waste several hours communicating with the team that ran the scan to figure out where they supposedly occurred.
I specifically remember two of the false positives were about my use of Serilog, claiming my code was vulnerable to log injection, even though Serilog isn't vulnerable to that if you use structured logging (which my code did). I even attempted several log injections myself and verified that they didn't actually work.
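Serilog is .NET, but the property that defeats log injection can be sketched with Python's stdlib logging as an analog: when user input is kept as a structured field and serialized (here via a hypothetical JSON formatter), injected newlines are escaped instead of starting a forged log line.

```python
import io
import json
import logging

# Hypothetical analog of a structured sink: a JSON formatter keeps
# user input as a field value, so a newline in the input is escaped
# to \n inside the JSON string rather than splitting the log entry.
class JsonFormatter(logging.Formatter):
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "user": getattr(record, "user", None),
        })

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(JsonFormatter())
log = logging.getLogger("injection-demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

# Attacker-supplied value that tries to forge a second log line
evil = 'alice\n{"level": "INFO", "message": "admin logged in"}'
log.info("login attempt", extra={"user": evil})

output = buf.getvalue()
# Still a single log line: the injected newline was escaped
assert output.count("\n") == 1
```

With naive string concatenation into the message, the same input would have produced two plausible-looking log lines; the structured path keeps it as one record.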
Edit: Looking at the reports it generated, I see it also failed to detect usage of the actually highly vulnerable BinaryFormatter class, which I have seen used several times by our offshore teams. That's something that could be caught with a simple string search, and they missed it.
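The "simple string search" the comment describes really is a few lines; here is a hypothetical sketch in Python that flags any C# source line mentioning BinaryFormatter (the function name and pattern are illustrative, and a real tool would also check `using` directives and aliases):

```python
import os
import re

# Flag the deserialization-unsafe BinaryFormatter class anywhere in
# .cs files under a directory tree. Word boundaries avoid matching
# unrelated identifiers that merely contain the substring.
PATTERN = re.compile(r"\bBinaryFormatter\b")

def find_binary_formatter(root):
    """Return (path, line_number, line) for each match under root."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".cs"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for lineno, line in enumerate(f, 1):
                    if PATTERN.search(line):
                        hits.append((path, lineno, line.strip()))
    return hits
```

Notably, this trivially reports the exact file and line number, which is precisely what the commercial scanner's reports failed to do.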
I would disagree that you can replace scanning tools with just human review though. You need both.
Static analysis and software composition analysis are good at flagging possible problems, but there's a lot of noise. You ideally want an application security engineer to be reviewing the code and scan results. And also conducting threat modelling with the devs, so you prevent problems even earlier.
Anecdotally, when I have done triage on such analysis, perhaps 5-10% of the things the tools marked as 'critical' (i.e. potentially critical) were flaws that might have had some actual impact and needed to be fixed. So when some vendor says in an article like this "The study found nearly two-thirds (63%) of the applications scanned had flaws in the first-party code", keep that in mind - they're generally treating all detections as real, and that's rarely the case.
But on the other hand, it may well be the case that it's simpler to have a process to just clean up all suspicious places in the code; just as the simplest way to avoid use-after-free errors is to use a garbage-collected language instead of just trying to ensure that every single memory allocation in C is correctly implemented.
You buy it as insurance for when your software gets hacked due to poor software engineering and you leak the identities of millions of people. You can claim that “we had a process in place for secure software development” whether or not the tool does anything useful. The CISO and CEO do not get fired, users cannot sue, life goes on.
Using such a compliance solution may also be a top-down requirement, as in the case of SolarWinds after they enabled the hacking of half of the US government: https://investors.solarwinds.com/news/news-details/2023/Sola...
I'm still wary from the time I tried to make a proof of concept with Vue and a graph viewer: I made an npm project with 5 dependencies and 2 development dependencies, and discovered I had just pulled in 1,400 indirect dependencies.
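One crude way to see that blow-up for yourself is to count how many packages npm actually installed; a hypothetical sketch that uses `package.json` files under `node_modules` as a rough proxy (some packages ship extra `package.json` files, so this slightly overcounts):

```python
import os

def count_installed_packages(node_modules):
    """Rough count of installed npm packages: one per directory that
    contains a package.json, walked recursively so nested (deduped or
    vendored) dependencies under inner node_modules dirs are included."""
    count = 0
    for dirpath, _dirs, files in os.walk(node_modules):
        if "package.json" in files:
            count += 1
    return count
```

Running something like this against a fresh `node_modules` for a small Vue project is where numbers in the four digits come from.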
Any kind of manual dependency management on a project like this will fail. It simply cannot be done.
The answer is devops:
You build your system in a way that it's nearly effortless to test and deploy, then you can upgrade your dependency tree continually with minimal effort.
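Concretely, that often looks like an automated-update bot plus a CI gate; a minimal sketch assuming GitHub's Dependabot (the schedule and limits here are illustrative, not a recommendation):

```yaml
# .github/dependabot.yml - hypothetical sketch: a bot opens weekly PRs
# that bump the dependency tree, and the existing test/deploy pipeline
# must go green before any of them merge.
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
    open-pull-requests-limit: 10
```

The point is the division of labor: the bot supplies a constant stream of small upgrades, and the cheap, automated test-and-deploy path is what makes merging them nearly effortless.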