The diaspora
Mozilla laid off much of the Rust team in August 2020
- Hacker News discussion https://news.ycombinator.com/item?id=24143819
~everybody landed at a big tech company
Result: Rust knowledge well distributed around industry
No one company dominates
- In contrast to Java, Swift, Go, C#, Dart
Strong sense of collaboration & common mission
- You can really feel this at conferences

In C++ if we destroy this Goose twice, probably everything catches on fire, whoops.
In Rust or Austral if we attempt to destroy this Goose twice the code doesn't compile. No executable for you until you stop destroying the Goose twice.
In C++ or Rust if we forget to destroy the Goose, the Goose-destroying code never runs. Hope that wasn't important.
In Austral if we forget to destroy the Goose, the code doesn't compile. No executable for you until you ensure the Goose is destroyed.
Languages struggle to win on usability alone, because outside of passion projects it's really hard to justify a rewrite of working software to get the same product, only with neater code.
But if the software also has a significant risk of being exploited, or is chronically unstable, or it's slow and making it multi-core risks making it unstable, then Rust has a stronger selling point. Management won't sign off a rewrite because sum types are so cool, but may sign off an investment into making their product faster and safer.
To be clear, I still think these criticisms are valid. However, after using the language in production, I've come to realize these problems are manageable in practice. The language is nice, decently well supported, and has a relatively rich ecosystem.
Every programming language/ecosystem is flawed in some way, and I think as an experienced dev you learn to deal with this.
Having an actual functioning npm-like package manager and build system makes building multiplatform software trivial. That's the thing about C++ that kills my desire to deal with that language on a voluntary basis.
The ecosystem is full of people who try to do their best and produce efficient code, and try to understand the underlying problem and the machine. It feels like it still has a culture of technical excellence, while most libraries seem to be also well organized and documented.
This is in contrast to JS people, who often try to throw together something as fast as possible, and then market the shit out of it to win internet points, or Java/C# people who overcomplicate and obfuscate code by sticking to these weird OOP design pattern principles where every solution needs to be smeared across 5 classes and design patterns.
But besides that I am totally on your side, usability is great.
"Every (lost) week spent refactoring to satisfy lifetime constraints..." Not wrong.
I love Rust, I'm a fan of writing it and I love the tooling. And I love to see it's (hopefully) getting more popular. Despite this, I'm not sure if "won" is the right word, because to my very uneducated eyes there is still a considerable amount of Rust not succeeding. Admittedly I don't write that much Rust (I should do more!), but when I do it always baffles me how many of the libraries recommended online are ghost towns. There are some really useful Rust libraries out there that haven't been maintained in years. It still feels like the Rust ecosystem is not quite there to be called a "successful" language. Am I wrong? This is really not a criticism of Rust per se; I'm curious about the answer myself. I want to dedicate much more time and resources to Rust, but I'm worried that 5 to 10 years from now everything will be unmaintained. E.g. Haskell had a much more vibrant community before Rust came along, and a decent number of Haskellers moved to Rust.
I have this all the time. Any new rust project and you have to wade through a bunch of once-great crates.
But that's because Rust is new. The initial surge overproduced solutions to, say, binary serialization, and underproduced, say, good geodesy libraries. And many, many were abandoned. Go to any of the "are we X yet" sites and you'll see many crates that are clearly not finished or advancing, but which were recently considered state of the art.
The problem is that sometimes a library may need to pin a dependency version. Or a dependency gets a newer major version release and does not back-port security fixes to older versions.
So one cannot just use an old library. Its dependency list must be carefully considered.
Now, this problem exists with any package management system. But in Rust it is more visible, as the language still evolves quickly, with non-trivial new features released often.
Library authors may then want to use newer language features in their API. So they simply bump the library's major version and maintain only that, and old dependencies will not get updates.
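For illustration, Cargo expresses the pinning described above with the `=` version requirement (the crate names and version numbers here are just examples, not a recommendation):

```toml
[dependencies]
# Exact pin: only this one version satisfies the requirement, so
# downstream users cannot unify it with a newer release.
serde = "=1.0.193"

# Default (caret) requirement: any semver-compatible 1.x.y release
# at or above 1.0.193 is accepted.
serde_json = "1.0.193"
```

This is why exact pins inside published libraries are generally discouraged: they block the whole dependency graph from picking up patched releases.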
I think this is a peculiarity of Clojure. Clojure is optimized for simplicity and as the saying goes: "It seems that perfection is attained not when there is nothing more to add, but when there is nothing more to remove."
I don't use Clojure these days. Maybe I should revisit the language. It has some really nice ideas.
I'm building a structural biology ecosystem in Rust, split out into several libs, and a GUI program. Molecular dynamics, file format interop, API interaction, 3D viewer/dashboard/manipulation etc. I also do embedded in Rust for work and personal projects. In both of these domains (high-performance scientific programming/GUI+3D, and embedded), I have had to write my own tooling. Nascent tools exist, but they are of too poor quality to use; i.e. easier to rewrite than to attempt to work around their limitations.
I'm at a loss. When I talk to people online about embedded rust, I find people discussing design patterns I think are high-friction (Async and typestates), and few people describe projects.
I think part of the problem is that it has acquired a group of people who design APIs and write libs without applying these libs to practical problems, which would expose their shortcomings.
At the same time, I've been working on an embedded Rust project and trying Embassy for the first time and it is _amazing_.
First of all I've had good logging and debug support with probe-rs. Secondly, being able to do ergonomic DMA transfers with async/await syntax feels like cheating. The core goes to sleep until your transfer finishes and an interrupt fires which wakes up the core and continues your async state machine, which the compiler built for you from your async code.
Distinct tasks can be split up into separate async functions, the type system models ownership of singleton hardware well (you can't use Pin 8 for your SPI bus there, you already "moved" it to your I2S peripheral over there!), and it all comes together with pretty minimal RAM usage, efficient use of the hardware, and low power consumption if you're mostly waiting on data transfers.
I'd be happy to talk more about practical problems if you want to get specific.
I think Rust will get those libraries, but it takes time. Rust is still young compared to languages with a large number of useful libraries. The Boost project in C++ started in the 90s, for example. It just takes time.
Oh and for the first 3 the relevant docs for using it in this way are out of date and you will need to look at source code.
I would say it won like it won the lottery, not like it won the tournament.
As professional developers, however, I think there is also the job market to consider. It obviously depends on where you live in the world, but in my area there have been 0 Rust jobs for 5 years. There are plenty of C++ jobs, there are even a few Zig jobs once in a while. Go on the other hand has seen an explosive growth, though probably as a replacement for C# and Java rather than for C++.
These days I write almost everything in Rust, and there are only two outlier situations:
- Environments where I can't use Rust effectively. Web (wasm is great but it's not there yet), Apple, Cloudflare workers/Cloudfront edge functions.
- Use cases where there aren't good tools for Rust (like web scraping, pdf manipulation, that sort of thing)
For organizations that have regulatory, safety, strong security etc concerns (a market Rust is a natural fit for) this could be critically important. But even more so I would just use it. I am tired of my `cargo tree` rapidly turning into an exploding maze. I don't want 3 different MD5 or rand or cryptography or http packages used in one static linkage, and I don't want them bringing in an exploding maze of transitive dependencies of their own.
In this stage, the unique standout features are given a lot of limelight, and people are a bit more forgiving with usability failings and library shortcomings, as that can be fixed later.
If they fail, they'll be relegated to the 'perpetually misunderstood' pile, like Haskell has.
Node/TS made the transition a while ago; Go's ride was a bit more bumpy (most people agree the language is good, but channels are a bit of an acquired taste).
I think Rust is in the process of making the jump. I think language devs and library maintainers are a bit more responsive to the borrow checker usability gripes (rather than the knee-jerk 'you just don't get it' reaction) and the ecosystem expands in both depth and breadth. Imo the question of Rust making it is more of a 'when' than 'if', but it's not there yet.
Rust keeps growing exponentially, but by Sturgeon's law for every one surviving library you're always going to have 9 crap projects that aren't going to make it. Unfortunately, crates.io sorts by keyword relevance, not by quality or freshness of the library, so whatever you search for, you're going to see 90% of crap.
There was always a chunk of libraries destined for the dustbin, but it wasn't obvious in the early days, when all Rust libraries were new. Now Rust has survived long enough to outlive waves of early-adopter libraries and grow a pile of obviously dead ones. The ecosystem is so large that the old part is large too.
https://lib.rs/stats#crate-time-in-dev
Rust is now mainstream, so it's not a dozen libraries made by dedicated early adopters any more. People learning Rust publish their hello world toys, class assignments, and their first voxel renderer they call a game engine. Startups drop dozens of libraries for their "ecosystem". Rust also lived through the peak of the cryptocurrency hype, so all those moon-going coins, smart contract VMs, and NFT exchanges now have a graveyard on crates.io.
When you run into dead libraries in Python or Java you don't think these languages are dying, it's just the particular libraries that didn't make it. JavaScript has fads and mass extinctions, and keeps going strong. Rust is old enough that it too has dead libraries, and big enough that it has both a large collection of decade-old libraries, as well as fads and fashions that come and go.
I would like to see support for more compilers (https://rust-gcc.github.io/), more interoperability with C/C++, better support for cross-compilation. Maybe less reliance on crates.io, static linking, and permissive licenses.
Still, I see Rust as the natural progression from C++. It has enough momentum to flatten all competitors (Carbon, Zig, Nim, Go) except scripting languages.
I don't; Rust has its niche but currently can't replace C++ everywhere.
From what I'm aware of, Rust has poor ergonomics for programs that have non-hierarchical ownership model (ie. not representable by trees), for example retained mode GUIs, game engines, intrusive lists in general, non-owning pointers of subobjects part of the same forever-lived singleton, etc.
> Go
To displace Go you must also displace Kubernetes and its ecosystem (unlikely, k8s is such a convenient tool), or have k8s move away from Go (not gonna happen considering who developed both)
In NPM you often get thousands of dependencies for things that should be simple like Vue.
Another factor is that projects are often split into many crates for compile time & modularity reasons, e.g. Gix is dozens of crates.
The "two versions of a crate" is actually a great thing. In other ecosystems like Python you would be simply unable to build the project at all because you can only have one version of any dependency.
The one thing that I think is a big problem is where you have splits in the ecosystem, e.g. anyhow vs snafu, or tokio vs smol. Once your project gets to a certain size, you end up including every vaguely popular error handling crate. I think that's one of the big downsides of a small standard library which I haven't heard anyone mention.
Because we won the argument, that line no longer works. If you say "it's a pipe dream" you're either ignorant or a liar. So in 2025 WG21 didn't say this can't be realised - when they were shown proposals to do exactly this - they said, well, we think we can achieve the same goals via a different route which suits us better, we just need more time.
Whether you believe that or not is a different conversation, but Rust won the argument.
Perhaps there could be a future where the compiler/checker will be able to integrate more closely with whichever agent is attempting to write Rust - more closely than the current paradigm, where a hapless Claude repeatedly bashes its head into the borrow checker to no avail.
Everyone makes similar statements about AI: AI is currently bad at this or that. I find it quite good at writing Rust. Can you give a concrete example of what AI failed to write for you?
This stuck with me. I hope we can say the same thing about Rust 10, 20 or 30 years from now.
Then one day Python was suddenly top of the charts, probably post-Python 3 (old timers hated it, but it really improved ergonomics), though not immediately; rather later on, when they fixed the performance loss from 2->3. (Looking back as a primarily Rust coder now, that was a hilarious argument for the community to have internally, because even my worst, most quickly cobbled-together Rust code beats some of my best Python code at the same task, performance-wise.)
https://docs.google.com/presentation/d/1SoDsm_m_pb_gS6Y98Hgh...
Hello! Big fan of your UI research