This is gonna ruffle some feathers, but it's only a matter of time until it happens in the Rust ecosystem, which loves to depend on a billion subpackages, and it won't be the fault of the language itself.
The more I think about it, the more I believe that C, C++, or Odin's decision not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good idea security-wise. Ambivalent about Go: they have a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.
Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.
So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.
On the topics it does cover, Rust's stdlib offers a lot. At least on the same level as Python, at times surpassing it. But because the stdlib isn't versioned it stays away from everything that isn't considered "settled", especially in matters where the best interface isn't clear yet. So no http library, no date handling, no helpers for writing macros, etc.
You can absolutely write pretty substantial zero-dependency Rust if you stay away from the network and async.
Whether that's a good tradeoff is an open question. None of the options look really great.
I honestly feel like that's one of Rust's biggest failings. In my ideal world libstd would be versioned, and done in such a way that different dependencies could call different versions of libstd, and all (sound/secure) versions would always be provided. E.g. reserve the "std" module prefix (and "core", and "alloc"), have `cargo new` default to adding the current std version in `cargo.toml`, have the prelude import that current std version, and make the module name explicitly versioned a la `std1::fs::File`, `std2::fs::File`. Then you'd be able to type `use std1::fs::File` like normal, but if you wanted a different version you could explicitly qualify it or add a different `use` statement. And older libraries would be using older versions, so no conflicts.
Curious, do you have specific examples of that?
That's some "small print" right there.
My personal experience (YMMV): Rust code takes 2x or 3x longer to write than what came before it (C in my case), but in the end you usually get something much more likely to work, so overall it's kind of a wash, and the product you get is better for customers - you basically front load the cost of development.
This is terrible for people working in commercial projects that are obsessed with time to market.
Rust developers on commercial projects are under incredible schedule pressure from day 0, where they are compared to expectations from their previous projects, and are strongly motivated to pull in anything and everything they can to save time, because re-rolling anything themselves is so damn expensive.
I'm all in favor of embiggening the Rust stdlib, but Rust and JS aren't remotely in the same ballpark when it comes to stdlib size. Rust's stdlib is decidedly not minimal; it's narrow, but very deep for what it provides.
Doing dev in a VM can help, but isn’t totally foolproof.
It would be lovely if Python shipped with even more things built in. I’d like cryptography, tabulate/rich, and some more featureful datetime bells and whistles a la arrow. And of course the reason why requests is so popular is that it does actually have a few more things and ergonomic improvements over the builtin HTTP machinery.
Something like a Debian Project model would have been cool: third party projects get adopted into the main software product by a sworn-in project member who acts as quality control / a release manager. Each piece of software stays up to date but also doesn’t just get its main branch upstreamed directly onto everyone’s laps without a second pair of eyes going over what changed. The downside is that it slows everything down, but that’s a side effect of, or rather a synonym for, stability, which is exactly the problem we have with npm. (This looks sort of like what HelixGuard do, in the original article, though I’ve not heard of them before today.)
I don't think languages should try to include _everything_ in their stdlib, and indeed trying to do so tends to result in a lot of legacy cruft clogging up the stdlib. But I think there's a sweet spot between having a _very narrow_ stdlib and having to depend on 160 different 3rd-party packages just to make a HTTP request, and having a stdlib with 10 different ways of doing everything because it took a bunch of tries to get it right. (cf. PHP and hacks like `mysql_real_escape_string`, for example.)
Maybe Python also has a historical advantage here. Since the Internet was still pretty nascent when Python got its start, it wasn't the default solution any time you needed a bit of code to solve a well-known problem (I imagine, at least; I was barely alive at that point). So Python could afford to wait and see what would actually make good additions to the stdlib before implementing them.
Compare to Rust, which _immediately_ had to run gauntlets like "what to do about async", with thousands of people clamoring for a solution _right now_ because they wanted to do async Rust. I can definitely sympathize with Rust's leadership wanting to do the absolute minimum required for async support while they waited for the paradigm to stabilize. And even so, they still get a lot of flak for the design being rushed, e.g. with `Pin`.
So it's obviously a difficult balance to strike, and maybe the solution isn't as simple as "do more in the stdlib". But I'd be curious to see it tried, at least.
Most Rust programmers are mediocre at best and really need the memory-safety training wheels that Rust provides. Years of Node.js mindrot have somehow made pulling in random dependencies with irregular release schedules the norm for these people. They'll just shrug it off, come up with some "security initiative", and continue the madness.
Saying this as someone who is cautiously optimistic about Rust for my own work.
Since your comment starts with commentary on crates.io, I'll note that this has never been possible on crates.io.
> Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn’t settled in the beginning.
I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.
> package namespaces
See https://github.com/rust-lang/rust/issues/122349
> proof of ownership
See https://github.com/rust-lang/rfcs/pull/3724 and https://blog.rust-lang.org/2025/07/11/crates-io-development-...
https://rust-lang.github.io/rfcs/0940-hyphens-considered-har...
That was from 2015, and the other discussions I remember were around default style, and that cargo already blocks a crate when its normalized name is equal to an existing one.
The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker from putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely used but boring? I feel like attackers could get away with that for weeks.
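To make the concern concrete: in Go, a dependency doesn't need an install hook at all, because any code in a package-level `init` runs automatically the first time the package is imported, before `main` ever starts. A minimal sketch, where the "leak" is just a stand-in for real exfiltration:

```go
package main

import "fmt"

// Anything in init() executes automatically when the package is first
// imported, before main() runs. A malicious dependency's init() gets
// the same treatment.
var leaked string

func init() {
	// Stand-in for real exfiltration, e.g. reading a cloud credential
	// from the environment and POSTing it somewhere.
	leaked = "pretend this is a credential"
}

func main() {
	fmt.Println("main sees:", leaked) // init has already run by now
}
```

The same applies transitively: importing a package that imports the compromised one is enough to trigger its `init`.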
This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
In Go, access to the OS and exec requires certain imports, imports that must occur at the beginning of the file; this helps when scanning for malicious code. Compare this to JavaScript, where one could require("child_process") or import() at any time.
Personally, I started to vendor my dependencies using go mod vendor and diff after dependency updates. In the end, you are responsible for the effect of your dependencies.
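That workflow can be scripted. The real steps (`go mod vendor`, update, `go mod vendor` again) need a live module and network access, so this sketch mocks the two vendor snapshots just to show the review step itself:

```shell
# Real workflow: `go mod vendor`, copy the tree aside, `go get -u ./...`,
# `go mod vendor` again, then diff. The two trees are mocked here so the
# diff-review step is reproducible on its own.
mkdir -p vendor.old/somedep vendor/somedep
printf 'func F() {}\n' > vendor.old/somedep/dep.go
printf 'func F() { /* new code to review */ }\n' > vendor/somedep/dep.go

# Every changed line in every dependency shows up for human review:
diff -ru vendor.old vendor || true
```

The `|| true` is there because `diff` exits non-zero whenever the trees differ, which is exactly the interesting case.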
I don’t believe I can do the same with Rust.
> In Go you know exactly what code you’re building thanks to gosum
Cargo.lock
> just create vendor dirs before and after updating packages and diff them [...] I don’t believe I can do the same with Rust.
cargo vendor
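`cargo vendor` copies every dependency's source into a local `vendor/` directory, and on success prints a snippet to add to `.cargo/config.toml` so builds use the vendored sources; after that, diffing the old and new `vendor/` trees across updates gives the same review point as in Go:

```toml
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```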
There is no airtight technical solution, for any language, for preventing malicious dependencies making it into your application. You can have manual or automated review using heuristics but things will still slip through. Malicious code doesn't necessarily look obvious, like decoding some base64 and piping it into bash, it can be an extremely subtle vulnerability sprinkled in that nobody will find until it's too late.
RE dependency cooldowns I'm hoping Go will get support for this. There's a project called Athens for running your own Go module proxy - maybe it could be implemented there.
You can't, really, aside from full on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.
Requiring GPG signing of releases (even by just git commit signing) would help, but that's more work for people to distribute their stuff, and inevitably someone will make an insecure but convenient way to automate that away from the developer.
Easy reason. The target for malware injections is almost always cryptocurrency wallets and cloud credentials (again, mostly to mine cryptocurrencies). And the vast majority of stuff interacting with crypto and the cloud, combined with a lot of inexperienced juniors who likely won't have the skill to spot that they got compromised, is written in Node.js.
As for Windows vs the other OSes: yes, even the Windows NT family inherited the DOS and Win9x ecosystem and tried to maintain compatibility for users over security, up until it became untenable. So yes, the base _was_ bad when Windows was dominant, but it's far less bad today (which is why people go after high-value targets via NPM etc.; it's an easier entry point).
Android and iOS are young enough that they did have plenty of hindsight when it comes to security and could make better decisions. (Remember that MS tried to move to UWP/Appx distribution, but the ecosystem was too reliant on existing features for it to displace the regular ecosystem.)
Remember that we've had plenty of annoyed discourse about "Apple locking down computers" here and on other tech forums when they've pushed notarization.
I guess my point is that people love to bash MS but at the same time complain about how security affects their "freedoms" when it comes to other systems (and partly MS). MS is better at the basics today than they were 20-25 years ago, and we should be happy about that.
Preventing the user from installing something that they want to install is another issue completely. I'm hesitant to call it exactly security, though I agree that it falls under the auspices of security.
This is just plainly false in case of Python.
> also the NPM ecosystem also allowed by default for a lot of post-install actions since they wanted to enable a smooth experience with compiling and installing native modules (Not entirely sure how Cargo and PIP handle native library dependencies).
Rust is already "native" so Cargo doesn't need to do anything.
Python has the logic to do native builds baked into pip and friends, so a Python package can just specify what to build in the manifest. But it also allows for precompiled wheels, and most popular packages do them for all major OSes, so users rarely need to actually build stuff in practice.
That and the package runtime runs with all the same privileges and capabilities as the thing you're building, which is pretty insane when you think about it. Why should npm know anything outside of the project root even exists, or be given the full set of environment variables without so much as a deny list, let alone an allow list? Of course if such restrictions are available, why limit them to npm?
The real problem is that the security model hasn't moved substantially since 1970. We already have all the tools to make things better, but they're still unportable and cumbersome to use, so hardly anything does.
> security model
yep, some kind of seccomp or other kind of permission system for modules would help a lot. (eg. if the 3rd party library is parsing something and its API only requires a Buffer as input and returns some object then it could be marked "pure", if it supports logging then that could be also specified, and so on)
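Nothing like this exists in npm today, but the idea could look something like a per-dependency capability manifest; everything below is hypothetical, sketched only to illustrate the shape:

```json
{
  "_note": "hypothetical capability manifest, not a real npm feature",
  "dependencyPermissions": {
    "some-parser": [],
    "some-logger": ["fs:append:./logs"],
    "some-http-client": ["net:api.example.com"]
  }
}
```

A package declared `[]` would be treated as "pure": no filesystem, no network, no environment variables, no child processes, enforced by the runtime rather than by trust.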
Still, I think the "allow-scripts" section or whatever it's called should be named "allow-unrestricted-access-to-everything". Or maybe just stick "dangerously-" in front, I dunno, and drop it when the mechanism is capable of fine-grained privileges.
When I download a C project, I know that it only depends on my system libraries - which I trust because I trust my distro. Rust seems to expect me to take a leap in the dark, trusting hundreds of packagers and their developers. That might be fine if you're already familiar with the Rust ecosystem, but for someone who just wants to try out a new program - it's intimidating.
Though I will say, even as someone who works at a company that sells Linux distributions (SUSE), while the fact we have an additional review step is nice, I think the actual auditing you get in practice is quite minimal.
For instance, quite recently[1] the Debian package for a StarDict plugin was configured to automatically upload all text selected in X11 to some Chinese servers if you installed it. This is the kind of thing you'd hope distro maintainers would catch.
Though, having build scripts be executed in distribution infrastructure and shipped to everyone mitigates the risk of targeted and "dumb" attacks. C build scripts can attack your system just as easily as Rust or JavaScript ones can (in fact it's probably even easier -- look at how the xz backdoor took advantage of the inscrutability of autoconf).
[1]: https://www.openwall.com/lists/oss-security/2025/08/04/1
Furthermore, for the purposes of this discussion, it really doesn't matter what code there is in the C project. What's there has been put there by the people who run the project. If they are malicious, then at least I know who they are. With Rust, I'm downloading and compiling code from many, many third parties. I have no idea who they are. The potential for one of them to be malicious is much, much higher.
Any time I did the equivalent in the NPM/Node world, it was basically unusable or completely impractical.
In .NET you can cover a lot of use cases simply using Microsoft libraries and even a lot of OSS not directly a part of Microsoft org maintained by Microsoft employees.
But realistically, I think the sum total of compromises via package managers attacks is much smaller than the sum total of compromises caused by people rolling their own libraries in C and C++.
It's hard to separate from C/C++'s lack of memory safety, which causes a lot of attacks, but the fact that code reuse is harder is a real source of vulnerabilities.
Maybe if you're Firefox/Chromium, and you have a huge team and invest massive efforts to be safe, you're better off with the low-dependency model. But for the median project? Rolling your own is much more dangerous than NPM/Cargo.
> The more I think about it, the more I believe that C, C++ or Odin's decision not to have a convenient package manager that fosters a cambrian explosion of dependencies to be a very good idea security-wise.
There was no decision in the case of C/C++; package managers were just not a thing languages had at the time, so the language itself (especially C) isn't written in a way that accommodates one nicely.
> Ambivalent about Go: they have a semblance of packaging system, but nothing so reckless like allowing third-party tarballs uploaded in the cloud to effectively run code on the dev's machine.
Any code you download and compile is running code on the dev machine, and Go does have tools to do that in the compile process too.
I do however like the default namespacing by domain: there is no central repository to compromise, and forks of any defunct libs are easier to manage.
I really agree, and I feel like it's a culture difference. Javascript was (and remains) an appealing programming language for tinkerers and hobbyists, people who don't really have a lot of engineering experience. Node and npm rose to prominence as a wild west with lots of new developers unfamiliar with good practices, stuck with a programming environment that had few "batteries included," and at a time when supply chain attacks weren't yet on everybody's minds. The barriers to entry were low and, well, the ecosystem sort of reflected that. You can't wash that legacy away overnight.
Rust in contrast attracts a different audience because of the language's own design objectives.
Obviously none of this makes it immune, and you can YOLO install random dependencies in any programming language, but I don't think any language is ever going to suffer from this in quite the same way and to the same extent that JS has simply due to when and how the ecosystem evolved.
And really, even JS today is not JS of yesteryear. Sure there are lots of bad actors and these bad NPM packages sneak in, but also... how widely are all of them used? The maturation of and standardization on certain "batteries included" frameworks rather than ad hoc piecing stuff together has reduced the likelihood of going astray.
- Packages are always namespaced, so typosquatting is harder
- Registries like Sonatype require you to validate your domain
- Versions are usually locked by default
My professional life has been tied to JVM languages, though, so I might be a bit biased.
I get that there are some issues with the model, especially when it comes to eviction, but it has been "good enough" for me.
Curious on what other people think about it.
1) No one forces you to use dependencies with large number of transitive dependencies. For example, feel free to use `ureq` instead of `reqwest` pulling the async kitchen sink with it. If you see an unnecessary dependency, you could also ask maintainers to potentially remove it.
2) Are you sure that your project is as simple as you think?
3) What matters is not number of dependencies, but number of groups who maintain them.
On the last point, if your dependency tree has 20 dependencies all maintained by the same trusted group or person, be it the Rust lang team (`libc`) or a well-known maintainer like dtolnay (`serde`), your supply chain risks are not multiplied by 20; they stay at one, almost the same as using just `std`.
Rust has already had a supply chain attack propagating via build.rs some years ago. It was noticed quickly, so staying pinned to the oldest version that works and has no CVE pop up in cargo audit is a decent strategy. The remaining risk is that some more niche dependency you use is, and always has been, compromised.
The safest code is the code that is not run. There is no lack of attacks targeting C/C++ code, and Odin is just a hobby language for now.
Which is worse: writing potentially vulnerable code yourself, or having too many dependencies?
Finding vulnerabilities and writing exploits is costly, and hackers will most likely target popular libraries over your particular software, much higher impact, and it pays better. Dependencies also tend to do more than you need, increasing the attack surface.
So your C code may be worse in theory, but it is a smaller, thus harder to hit target. It is probably an advantage against undiscriminating attacks like bots and a downside against targeted attacks by motivated groups.
It's also very confusing (and I think those attack vectors benefit exactly from that), since you have a dependency, but the dep itself depends on another version of yet another dep.
Building a basic CapacitorJS / Svelte app, as an example, results in many deps.
It might be a newbie question, but: is there any solution or workflow where you don't end up in this dependency hell?
Maybe I'm being a bit trite but the world of JavaScript is not some mysterious place separate from all other web programming, you can make bad decisions on either side of the stack. These comments always read like devs suddenly realizing the world of user interactions is more complicated and has more edge cases than they think.
Being incredibly strict with TS compiler and linter helps a bit.
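For reference, the usual starting point for "incredibly strict" is a `tsconfig.json` fragment along these lines (all real compiler flags; which ones are worth the friction is a judgment call):

```json
{
  "compilerOptions": {
    "strict": true,
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,
    "noImplicitOverride": true,
    "noFallthroughCasesInSwitch": true
  }
}
```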
But the basic takeover... no, it usually won't affect any Debian-style distro package, due to the release process.
Things like cargo-vet help as does enforcing non-token auth, scanning and required cooldown periods.
The alternative that C/C++/Java end up with is that each and every project brings in their own Util, StringUtil, Helper or whatever class that acts as a "de-facto" standard library. I personally had the misfortune of having to deal with MySQL [1], Commons [2], Spring [3] and indirectly also ATG's [4] variants. One particularly unpleasant project I came across utilized all four of them, on top of the project's own "Utils" class that got copy-and-paste'd from the last project and extended for this project's needs.
And of course each of these Utils classes has their own semantics, their own methods, their own edge cases and, for the "organically grown" domestic class that barely had tests, bugs.
So it's either a billion "small gear" packages with dependency hell and supply chain issues, or it's an amalgamation of many many different "big gear" libraries that make updating them truly a hell on its own.
[1] https://jar-download.com/artifacts/mysql/mysql-connector-jav...
[2] https://commons.apache.org/proper/commons-lang/apidocs/org/a...
[3] https://docs.spring.io/spring-framework/docs/current/javadoc...
[4] https://docs.oracle.com/cd/E55783_02/Platform.11-2/apidoc/at...
And what is wrong with writing your own util library that fits your use case anyway? In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency. No one sane will add a third-party git submodule, wire it to the main Makefile, just to left-pad a string.
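For scale: a left-pad really is only a few lines in any language. A sketch, in Go rather than C purely to keep the example short:

```go
package main

import (
	"fmt"
	"strings"
)

// leftPad pads s on the left with the pad rune until it is width runes long.
// Utilities this small are usually cheaper to own than to depend on.
func leftPad(s string, width int, pad rune) string {
	n := width - len([]rune(s))
	if n <= 0 {
		return s // already wide enough; never truncate
	}
	return strings.Repeat(string(pad), n) + s
}

func main() {
	fmt.Println(leftPad("42", 5, '0')) // prints "00042"
}
```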
Yeah, that's why I said that this is the other end of the pendulum.
> In C/C++ world, if it takes less than a couple hours to write, you might as well do it yourself rather than introduce a new dependency.
Oh I'm aware of that. My point still stands - that comes at a serious maintenance cost as well, and I'd also say a safety cost because you're probably not wrapping your homebrew StringUtils with a bunch of sanity checks and asserts, meaning there will be an opportunity for someone looking for a cheap source of exploits.
If your Rust software observes a big enough chunk of the computer fever dream you are likely to end up with 2-3 digit amount of Rust dependencies, but they are probably all going to be high profile ones (tokio, anyhow, reqwest, the hyper crates, ...), instead of niche ones that never make it into any operating system.
This is not a silver bullet of course, but there seems to be an inverse correlation between "is part of any operating system dependency tree" and "gets compromised in an npm-like incident".
What would actually stop this is writing compilers and build systems in a way that isolates builds from one another. It's kind of stupid: all a compiler really needs is an input file, a list of dependencies, and an output file. Yet they all make it easy to root around, replicate, and exfiltrate. A build can be both convenient and not suffer from this style of attack.
Must read: https://wiki.alopex.li/LetsBeRealAboutDependencies
TL;DR: ditch crates.io and copy Go: decentralized packages fetched directly from source repositories, plus an extended standard library.
Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
On the other hand, C / C++ style dependency management is even worse than Rust's... Both in terms of development velocity and dependencies that never get updated.
Don't make me tap the sign: https://news.ycombinator.com/item?id=41727085#41727410
> Centralized package managers only add a layer of obfuscation that attackers can use to their advantage.
They add a layer of convenience. C/C++ are missing that convenience because they aren't as composable and have a long tail of pre-package manager projects.
Java didn't start with packages, but today we have packages. Same with JS, etc.
My real worry, for myself re the parent comment is, it's just a web frontend. There are a million other ways to develop it. Sober, cold risk assessment is: should we, or should we have, and should anyone else, choose something npm-based for new development?
Ie not a question about potential risk for other technologies, but a question about risk and impact for this specific technology.
Every time I fire up "cmake" I chant a little spell that protects me from the goblins that live on the other side of FetchContent to promise to the Gods of the Repo that I will, eventually, review everything to make sure I'm not shipping poop nuggets .. just as soon as I get the build done, tested .. and shipped, of course .. but I never, ever do.
`#![no_std]`
(These are arguably good things in other contexts.)
I installed the package, obviously I intend to run it. How does getting pwned once I run it manually differ from getting pwned once I install it? I’m still getting pwned
I understand that there's been some course correction recently (zero dependency and minimal dependency libs), but there are still many devs who think that the only answer to their problem is another package, or that they have to split a perfectly fine package into five more. You don't find this pattern of behavior outside of Node.
The medium is the message. If a language creates a very convenient package manager that completely eliminates the friction of sharing code, practically any permutation of code will be shared as a library. As productivity is the most important metric for most companies, devs will prefer the conveniently-shared third-party library instead of implementing something from scratch. And this is the result.
I don't believe you can have packaging convenience and avoiding dependency hell. You need some amount of friction.
It’s essentially remote execution a la carte.
If anything, blind reliance on LLMs will make this problem much worse.
Now will you trust that AI didn't include its own set of security issues and will you have the ability to review so much code?
Libraries will be providing raw tools: sockets, regex engines, cryptography, syscalls, specific file-format libraries.
LLMs will be building the next layer.
I have built successfully running projects now in Erlang, Scheme, and Rust. I know the basic syntax of two of those, but I couldn't have written my deployed software in any of them in the couple of hours of prompting.
For the Scheme it had to write a lot of code from first principles and warned me how laborious it would be. "I don't care, you are doing it."
I have tools now I could not have imagined I could build in a reasonable time.
Totally 100% agree, though tools like cargo tree make it more of a tractable problem, and running vendored dependencies is first class at least.
The one I am genuinely most concerned about is Golang. The way dependencies are handled leaves much to be desired; I'm honestly surprised that there haven't been issues.