Because it's not a silver bullet. That safety comes at a cost; Rust is much more difficult to learn than C or Zig and the compilation time for code with equivalent semantics is an order of magnitude greater. It has also added a great deal of toolchain complexity to projects like the Linux kernel.
People have decided that the pros outweigh the cons in those particular cases, but those cons exist nonetheless.
(fwiw, I teach undergrad systems programming in C, I use Python at the startup, and I use a mix of C/C++/Rust in research.)
I would personally much prefer to use Rust for code exposed to external untrusted input than to use C. I have substantially more confidence that I would not add exploitable bugs given the same time budget.
C and C++ are incredibly subtle languages. But you can get a lot of code written before you run into certain foot guns in C and C++. This gives those languages a more enjoyable on-ramp for beginners.
In comparison, Rust is a wall. The compiler just won’t compile your code at all if you do anything wrong. This makes the act of learning Rust much more painful. But once you’ve learned Rust, it’s a much smoother experience. There are far fewer ways for your programs to surprise you at runtime.
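A minimal sketch of the "wall" (my own illustration, not from the thread): returning a reference to a local compiles in C, often with only a warning, and blows up at runtime; in Rust the same shape is a hard compile error, which forces the ownership-correct version.

```rust
// The C-style foot gun, transliterated, does not compile in Rust:
//
//     fn make_greeting() -> &str {   // error[E0106]: missing lifetime specifier
//         let s = String::from("hello");
//         &s                         // error[E0515]: cannot return reference
//     }                              //               to local variable `s`
//
// The compiler-approved version hands ownership of the String to the caller,
// so there is no dangling reference to get wrong in the first place.
fn make_greeting() -> String {
    let s = String::from("hello");
    s // ownership moves out; `s` is not freed behind the caller's back
}

fn main() {
    let greeting = make_greeting();
    println!("{}", greeting);
}
```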
1) Design in conversation with AI. Let it criticise your ideas, give you suggestions on tools/libraries/techniques, and explain concepts and syntax to you. Stay aware that these models are sycophantic yes-machines.
2) Implement yourself.
3) Debug in collaboration with AI. If you ask a question like "I'm getting [error], what are the most likely reasons for this happening?", you can save a lot of time finding the issue. Just make sure to also research why it is happening and how to solve it independently.
4) Let AI criticise your final result and let it offer suggestions on what to improve. Judge these critically yourself.
There is some worth in spending hours trying to fix a bug you don't understand: it builds resilience, helps you get familiar with a lot of language topics, and you probably won't make the same mistake again. But the above approach is a pretty good compromise, letting AI help where it excels while still keeping enough control to actually learn something yourself.
I believe that Rust is the language benefiting the most from agentic AI, because the compiler is such a strong gate-keeper, and the documentation of almost all aspects of the language is comprehensive and clear. The biggest pain points of Rust are also reduced by AI: the front-loaded learning curve is softened, refactoring is something generative AI is actually decent at, and long compile times can be spent productively planning out the next steps.
If a language is hard to write at first, it’s always hard to write. The saving grace of C++ is that one needn’t use the overcomplicated functional aspects, template meta-programming, etc. Through some amazing circumstances, all of the above (or their equivalents) plus async is exactly what idiomatic Rust code has become.
Inside Rust there is a not-so-ugly language struggling to come to light, and it is being blocked at every step.
Of course adding an additional set of tooling complicates an environment. I'm sure that was the case for Google in adding Rust to Android as well. And yet, it seems to have proved worth it. And I suspect that in the long term it will prove likewise for Linux, because Linux shares the same requirements and has a similar threat model it needs to guard against.
Suppose we have a team of experts busily analyzing every single state of the code. They are reading, valgrinding, fuzzing, and so on, in real time as the intermediate developer writes code.
Each time the developer tries to compile, the team quickly votes either to a) remain silent and leave the dev alone, or b) stop compilation because someone thinks they've discovered an invalid read/write or some other big no-no that the compiler will not catch (but the Rust compiler would catch).
If they choose b, the experts stop for a bit and discuss the clearest way to communicate the hidden bug. Then they have a quick conversation with the intermediate developer. Suggestions are made, and the whole process repeats.
Is this process substantially faster than just learning Rust?
Edit: clarification
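The kind of "big no-no" the hypothetical team is hunting can be made concrete (my own sketch, not from the thread): an out-of-bounds read. In C it compiles cleanly and silently reads adjacent memory; in Rust a constant index like this is rejected at compile time, a dynamic one panics deterministically, and `get` turns the out-of-bounds case into a value the caller must handle.

```rust
fn main() {
    let buf = [10, 20, 30];
    // C: `buf[3]` compiles and silently reads whatever sits past the
    // array (undefined behaviour) -- exactly the invalid read the expert
    // team is watching for.
    // Rust: a checked access returns an Option instead of garbage.
    assert_eq!(buf.get(1), Some(&20)); // in bounds
    assert_eq!(buf.get(3), None);      // out of bounds, surfaced as a value
    println!("in bounds: {:?}, out of bounds: {:?}", buf.get(1), buf.get(3));
}
```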
It is misguided to say that recursive data structures should be easy to write. They are difficult to reason about, and the Rust compiler is right to point this out. CS is not engineering; you should be writing those trees with a pencil on a piece of paper as Dijkstra intended, not in C.
And is your hand-written C implementation going to be safe and correct? You didn't mention any kind of locking or atomic operation, so is it going to unexpectedly break in a multithreaded environment?
The concepts of Rust are definitely more complicated, but in practice it just means that C makes it easier to shoot yourself in the foot. It's easy, but is that really the most important thing here?
In the same vein, driving on a modern busy road requires you to know about lanes, speed limits, various signs, traffic lights, rules of turning and merging, etc. A road without all of that, where a steering wheel plus two pedals suffice, of course still allows you to drive, and drive fast, but it demands much more attention from the driver; many drivers' mistakes are noticed later and lead to more dangerous accidents.
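To make the thread-safety point concrete, here is a minimal sketch (mine, not the commenter's): in Rust, sharing mutable state across threads without `Arc` and `Mutex` simply doesn't compile, so the locking question can't be forgotten the way it can in C.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_count(n_threads: usize, per_thread: usize) -> usize {
    // Arc gives shared ownership across threads; Mutex gives exclusive
    // access. Omitting either is a compile error, not a data race found
    // in production.
    let counter = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();
    for _ in 0..n_threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..per_thread {
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    println!("{}", parallel_count(4, 1000));
}
```

The equivalent unsynchronized C counter compiles fine and miscounts intermittently; here the type system makes the synchronized version the only one that builds.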
This assumes a self-balancing binary tree must have nodes with parent pointers. Without those you don't need reference counting and without that you don't need `RefCell` either.
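As a sketch of that claim (my own minimal example, with balancing rotations elided for brevity): a binary search tree with child pointers only needs nothing beyond plain `Box` ownership, since every node has exactly one owner.

```rust
// A child-pointer-only BST: no parent pointers, so no shared ownership,
// so no Rc and no RefCell -- each subtree is uniquely owned by its parent.
struct Node {
    value: i32,
    left: Option<Box<Node>>,
    right: Option<Box<Node>>,
}

impl Node {
    fn new(value: i32) -> Self {
        Node { value, left: None, right: None }
    }

    fn insert(&mut self, value: i32) {
        let child = if value < self.value { &mut self.left } else { &mut self.right };
        match child {
            Some(node) => node.insert(value),
            None => *child = Some(Box::new(Node::new(value))),
        }
    }

    fn contains(&self, value: i32) -> bool {
        if value == self.value {
            true
        } else if value < self.value {
            self.left.as_ref().map_or(false, |n| n.contains(value))
        } else {
            self.right.as_ref().map_or(false, |n| n.contains(value))
        }
    }
}

fn main() {
    let mut root = Node::new(5);
    for v in [3, 8, 1, 4] {
        root.insert(v);
    }
    println!("{} {}", root.contains(4), root.contains(7));
}
```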
It does look like a silver bullet, actually. In the context of software engineering, "silver bullet" inevitably leads to Fred Brooks:
"No Silver Bullet—Essence and Accident in Software Engineering" is a widely discussed paper on software engineering written by Turing Award winner Fred Brooks in 1986. Brooks argues that "there is no single development, in either technology or management technique, which by itself promises even one order of magnitude [tenfold] improvement within a decade in productivity, in reliability, in simplicity."
Reducing memory-safety vulnerabilities by 5000x compared to the prior approach is not just a silver bullet, it's an arsenal of silver bullets.
> the compilation time for code with equivalent semantics is an order of magnitude greater
The time it takes to write and run the comprehensive tests for C and Zig code to demonstrate anything even approximately in the ballpark of what Rust gives you for free is multiple orders of magnitude greater than whatever time you spent waiting for the Rust compiler. Why care about the time it takes to compile trivially incorrect code, rather than caring about the total time it takes to produce reliable software, which is demonstrably lower for memory-safe languages like Rust?