Do they? I mean, really? Let's lay aside the fact that it's almost impossible to eyeball security. I just cannot imagine that Google works so differently from every company I've ever worked at that they actually carefully check the stuff they use. Every company I've worked at has had a process for using external code. Some have been stricter than others, but none have meaningfully required engineers to make a judgement on the security of the code. All of them boil down to speed-running a pointless process.
And that leaves aside the obvious question: I want to use a crate, and I check it 'works' for what I need. Some middle-manager type has mandated that I now have to add it to the crate audit (FYI, this is the point at which I dropped importing the library and just wrote it myself), so I add it to the crate audit. Some other poor sap comes along and uses it because I audited it, but he's working on VR goggles while I was working on in-vitro fertilization of cats, and he's using a whole set of functions that I didn't even realise were there. When his VR goggles fertilize his beta testers' eyes with cat sperm due to a buffer overflow, which of us gets fired?
https://chromium.googlesource.com/chromiumos/third_party/rus...
Seems there are 3-4 folks who helped build this and spent a lot of time doing initial audits; they outsource crypto algorithm audits to specialists.
These checks often don't attempt to detect actual exploit paths; rather, they flag usage of APIs that may simply lead to a vulnerability. These checks can only be disabled per file or per symbol, and per check, by a member of the security team via an allowlist change that has to land in the same commit.
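A purely hypothetical sketch of what such an allowlist entry could look like (the format and every field name here are invented for illustration; this is not Google's actual tooling):

```toml
# hypothetical-allowlist.toml — invented format for illustration only
[[exception]]
check = "banned-unsafe-api"            # which check is being suppressed
symbol = "std::mem::transmute"         # suppressed per symbol...
file = "third_party/foo/src/codec.rs"  # ...and per file, never globally
approved_by = "security-team"          # must land in the same commit
```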
This is not perfect but is by far the most stringent third party policy I’ve seen or worked with. The cost of bringing 3p code into the fold is high.
The flipside of this is that Google tech ends up with an insular and conservative outlook. I'd describe the Google stack as 'retro-futuristic'. It is still extremely mature and effective.
But meanwhile in the regular universe, yes it happens the way you say.
Adding a dependency also generates a change list (because dependencies are vendored), and so the normal code review guidelines apply. Both the person adding the dependency and the reviewer should read through the code to make sure that the code is in a good state to be submitted, like any other code (excluding style violations). Small bugs can be fixed with follow up CLs. If the author/reviewer doesn’t understand e.g. the security implications of adding the dependency, they should not submit the CL.
There's a pretty large gap between auditing every line of code and doing nothing. Google does a good job managing external dependencies within their monorepo. There's dedicated tooling, infrastructure, and processes for this.
I set up the environment to disable normal package-repo access. Every third-party package we wanted to use had to be imported into a mirror in our code repo and audited. (The mirror also preserved multiple versions at once, like the package manager did.) New versions were also audited.
One effect of this was that I immediately incurred a cost when adding a new dependency on some random third party, which hinted at the risk. For example, if a small package had pulled in a dozen other dependencies, which I also would've had to maintain and audit, I would've gone "oh, heck, no!" and considered some other way.
At a later company, in which people had been writing code pulling on the order of a hundred packages from PyPI (and not tracking dependency versions), yet it all had to run in production with very, very sensitive customer data... that was interesting. Fortunately, by then software supply chain attacks were a thing, so at least I had something to point to showing my concern wasn't purely theoretical but a real, active threat.
Now that I have to use Python, JavaScript, and Rust, the cavalier attitudes towards pulling in whatever package some Stack Overflow answer used (and whatever indirect dependencies that package adds) are a source of concern and disappointment. Such are current incentives in many companies. But it's nice to know that some successful companies, like Google, take security and reliability very seriously.
The PM gets promoted for encouraging fast experimentation!
Only ~1,000 packages, but it certainly seems they do that for a subset.
Ahh, classic undefined behavior.
My understanding is that this repository, and similar ones from Mozilla and others, says: "I, person X from trustworthy organization Y, have reviewed version 1.0 of crate foo and deemed it legit" (for a definition of trustworthy and legit).
But how does that help me if I want to be careful about what I depend on and about supply-chain attacks? I ask for version 1.0 of crate foo but might get some malicious payload without knowing it.
It's not the worst thing, I suppose: #1 is a problem anyway for trusting Google/Mozilla's repo of audits, and #2 can be noticed by others, so it's hard to pull off a supply chain attack that way.
But I would still feel more confident if the audit log contained a copy of the checksum, and ideally itself was signed with author's keys.
cargo-vet went for the other extreme of being super simple. To fill in their review report you don't even need any tooling.
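For reference, a cargo-vet audit is just a TOML entry appended to `audits.toml` (the crate name, reviewer, and notes below are made up; the field names follow cargo-vet's documented format):

```toml
# audits.toml — hypothetical entry
[[audits.foo]]
who = "Jane Reviewer <jane@example.com>"
criteria = "safe-to-deploy"
version = "1.0.0"
notes = "Read the unsafe blocks; no network or filesystem access."
```

Note that the entry identifies the crate only by name and version; as the comment above points out, there's no checksum or signature in the record itself.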
I’m a junior C++ dev that dabbles with rust in my free time, and I always feel a bit nervous when pulling huge dependency trees with tons of crates into projects.
I would assume most places would turn away from the “node.js” way of doing these things and would just write internal versions of things they need.
Again, I am junior, so maybe my worries are way overblown.
On the other hand, Python folks and JavaScript users (which make up a lot of emigres to Rust) probably don't care enough about their supply chain. That's how you end up with misspelled packages causing viruses in production and other disasters.
The short answer to this is that it actually depends a lot on what you are doing.
For all the stories about malicious packages on PyPI and whatnot: I can't recall ever seeing a story about "misspelled packages caused us problems in production". Most of these packages have downloads in the low-hundreds at best, and I wouldn't be surprised if the vast majority are from the attackers testing it and bots automatically downloading packages for archiving, analysis, etc. I've come to think it's not as much of a big deal as it's sometimes made out to be.
The closest I've seen is the whole event-stream business where the maintainer transferred it to someone else who promptly inserted some crypto-wallet stealing code, but that's a markedly different scenario (and that also seems quite rare; it was over 4 years ago).
Python and JavaScript people, I would imagine, find Rust annoying, since it's all the niceties they are used to but with a bunch of rules on top.
Personally, I think C++'s aversion to sane dependency management is more about C++'s "I know better than you" culture and legacy cruft (packages are usually managed by the distro, not the language) than about any serious security implications.
Still, most environments I've worked in had internal repos for packages; no CI/CD server talks to the outside world, and vendoring isn't allowed.
Incorrect assumption; look up the left-pad fiasco [1]. Its importance is really a matter of personal opinion; convenience nearly always trumps security, so if the npm way allows you to increase sales by ~10%, you'll see people continue to do it.
Google is fairly principled though, all of the 3p code is internally vendored and supposed to be audited by the people pulling in that code/update.
[1]: https://www.google.com/search?q=leftpad+broke+the+internet
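For context, the package at the center of that fiasco was about a dozen lines of JavaScript; a Rust sketch of the same function shows how trivial a dependency can be and still break thousands of builds when it disappears:

```rust
// A Rust sketch of left-pad: pad `s` on the left with `pad` until it
// is at least `len` characters long.
fn left_pad(s: &str, len: usize, pad: char) -> String {
    // saturating_sub: if the string is already long enough, pad with nothing
    let missing = len.saturating_sub(s.chars().count());
    let mut out: String = std::iter::repeat(pad).take(missing).collect();
    out.push_str(s);
    out
}

fn main() {
    println!("{}", left_pad("foo", 5, ' '));   // "  foo"
    println!("{}", left_pad("foobar", 3, '0')); // unchanged: "foobar"
}
```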
On security it's a tradeoff. The open-source version is an easier target for attackers, but might be much more battle-tested and thus more bug-free. Audits are the attempt to have the best of both worlds here, and since they again can be crowd-sourced (with cargo-vet and cargo-crev both working on this), it scales even for companies that aren't Google-sized.
I assume most places don't care.
At least in Rust a large part of the security issues that would be VERY time consuming to audit at scale through your dependency tree (whether internal or public) are covered by the compiler/borrow checker/type-system.
In that sense I would take on a larger number of dependencies in Rust than I would in C++ while sleeping better.
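For instance, out-of-bounds indexing, the classic C++ buffer-overflow source (including the hypothetical cat-sperm one above), can't silently corrupt memory in safe Rust: it's either an explicit `Option` or a deterministic panic.

```rust
fn main() {
    let v = vec![1, 2, 3];

    // `get` makes the bounds check explicit: no UB, just an Option.
    match v.get(10) {
        Some(x) => println!("got {x}"),
        None => println!("index 10 is out of bounds"),
    }

    // Direct indexing is also bounds-checked; `v[10]` would panic
    // deterministically rather than read adjacent memory as C++ might.
    println!("{}", v[1]);
}
```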
Imagine if all companies and rust developers started sharing what crates they were confident in + what other organizations they trust as well. If you could then create your own set of such companies, and then choose a dependency depth you were willing to go down to, you might be able to quickly vet a number of crates this way, or at least see the weird crates that demand a bit more attention.
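The scheme above is essentially a bounded walk of a trust graph. A minimal sketch (all org names, crate names, and the data layout are invented for illustration): each org publishes the crates it has vetted and the other orgs it trusts, and you union the vetted sets of every org reachable within `depth` hops of your own.

```rust
use std::collections::{HashMap, HashSet};

// Collect all crates vetted by any org reachable within `depth` trust
// hops of `root`. `trusts` maps org -> orgs it trusts; `vets` maps
// org -> crates it has vetted.
fn vetted_crates(
    trusts: &HashMap<&str, Vec<&str>>,
    vets: &HashMap<&str, Vec<&str>>,
    root: &str,
    depth: usize,
) -> HashSet<String> {
    let mut seen: HashSet<&str> = HashSet::new();
    let mut frontier = vec![root];
    let mut out = HashSet::new();
    for _ in 0..=depth {
        let mut next = Vec::new();
        for org in frontier {
            if !seen.insert(org) {
                continue; // already visited via a shorter trust path
            }
            for krate in vets.get(org).into_iter().flatten() {
                out.insert(krate.to_string());
            }
            for trusted in trusts.get(org).into_iter().flatten() {
                next.push(*trusted);
            }
        }
        frontier = next;
    }
    out
}

fn main() {
    let trusts = HashMap::from([("us", vec!["google"]), ("google", vec!["mozilla"])]);
    let vets = HashMap::from([
        ("us", vec!["serde"]),
        ("google", vec!["rand"]),
        ("mozilla", vec!["libc"]),
    ]);
    // depth 1: our own audits plus those of orgs we trust directly,
    // but not orgs they trust in turn
    println!("{:?}", vetted_crates(&trusts, &vets, "us", 1));
}
```

Crates vetted only by an org two hops out (here, "libc") would show up exactly when you raise the depth, which is the "weird crates that demand a bit more attention" signal.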
If this could be added to whackadep[1] then you'd be able to monitor your Rust repo pretty solidly!
[1]: https://www.cryptologie.net/article/550/supply-chain-attacks...