- Rust has no overhead from a "framework"
- Rust programs start up quickly
- The Rust ecosystem makes it very easy to compile a command-line tool without lots of fluff
- The strict nature of the language helps guide the programmer to write bug-free code.
In short: There's a lot of good reasons to choose Rust that have little to do with the presence or absence of a garbage collector.
I think having a working garbage collector at the application layer is very useful, even if, at a minimum, it only makes Rust easier to learn. I do worry about 3rd party libraries using garbage collectors, because garbage collectors tend to impose a lot of requirements, which is why a garbage collector usually is tightly integrated into the language.
Rust's predominant feature, the one that brings most of its safety and runtime guarantees, is borrow checking. There are things I love about Rust besides that, but the safety from borrow checking (and everything the borrow checker makes me do) is why I like programming in Rust. Now, when I program elsewhere, I'm constantly checking ownership "in my head", which I think is a good thing.
The heavyweight framework (and startup cost) that comes with Java and C# makes them challenging for widely-adopted lightweight command-line tools. (Although I love C# as a language, I find the Rust toolchain much simpler and easier to work with than modern dotnet.)
Building C (and C++) is often a nightmare.
This isn't even a new concept in Rust. Rust already has a well-accepted Rc<T> type for reference-counted pointers. From a usage perspective, Gc<T> seems to fit the same pattern.
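For comparison, here is a minimal sketch of the Rc<T> pattern that a library Gc<T> would mirror. The comments about Gc are an assumption based on the parallel drawn above, not a description of Alloy's actual API:

```rust
use std::rc::Rc;

// Shared ownership with Rc<T>: clones bump a count instead of deep-copying.
fn shared_sum(data: &Rc<Vec<i32>>) -> i32 {
    let alias = Rc::clone(data); // refcount goes to 2 here
    alias.iter().sum() // alias dropped at end of function, count back to 1
}

fn main() {
    let data = Rc::new(vec![1, 2, 3]);
    assert_eq!(shared_sum(&data), 6);
    assert_eq!(Rc::strong_count(&data), 1);
    // A hypothetical Gc<T> would slot into the same usage pattern:
    // Gc::new(value), cheap handle copies, shared access -- with
    // reclamation done by tracing rather than counting.
}
```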
Add opt-in development compilation JIT for quick iteration and you don't need any other language. (Except for user scripts where needed.)
I don't know what you had to go through before reaching Rust's safe haven, but what you just said is true for the vast majority of compiled languages, which are legion.
Is there a real distinction between any of those?
If you ban doing that, then you’re basically back to manual memory management.
But in practice it's more like there's an overhead for "hello world" but it's a fixed overhead. So it's really only a problem where you have lots of binaries, e.g. for coreutils. The solution there is a multi-call binary like Busybox that switches on argv[0].
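The Busybox trick can be sketched in a few lines of Rust; the applet names here are hypothetical, just to show the dispatch on argv[0]:

```rust
use std::path::Path;

// Multi-call dispatch: one binary, behavior chosen by the name it was
// invoked as (argv[0]). Install once, hard-link as `cat`, `true`, etc.
fn applet_for(argv0: &str) -> &'static str {
    match Path::new(argv0).file_name().and_then(|n| n.to_str()) {
        Some("cat") => "cat",
        Some("true") => "true",
        _ => "unknown",
    }
}

fn main() {
    let argv0 = std::env::args().next().unwrap_or_default();
    println!("would run applet: {}", applet_for(&argv0));
}
```

The fixed "hello world" overhead is then paid once for the whole tool suite instead of once per utility.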
C programs often seem small because you don't see the size of their dependencies directly, but they obviously still take up disk space. In some cases they can be shared but actually the amount of disk space this saves is not very big except for things like libc (which Rust dynamically links) and maybe big libraries like Qt, GTK, X11.
In my personal projects with Rust, this ends up being very nice because it makes packaging easier. However, I've never been in a situation where binary size matters like in the embedded space, for example.
Rust isn't the only language with this approach; Go is another.
They may be larger because they are doing more work, depends on the program.
But no they don’t statically compile everything.
I was previously excited about this project which proposed to support arena allocation in the language in a more fundamental way: https://www.sophiajt.com/search-for-easier-safe-systems-prog...
That effort was focused primarily on learnability and teachability, but it seems like more fundamental arena support could help even for experienced devs if it made patterns like linked lists fundamentally easier to work with.
Yes, because it defeats borrow checking.
Unsafe Rust, used directly, works too
Async/await really desperately needs a garbage collector. (See this talk from RustConf 2025: https://youtu.be/zrv5Cy1R7r4?si=lfTGLdJOGw81bvpu and this blog: https://rfd.shared.oxide.computer/rfd/400)
Rust that uses standard techniques for asynchronous code, or is synchronous, does not. Async/await sucks all the oxygen from asynchronous Rust.
Async/await Rust is a different language, probably more popular, and worth pursuing (for somebody, not me). It already has a runtime, and the dreadful hacks like [pin](https://doc.rust-lang.org/std/pin/index.html) are due to the lack of a garbage collector.
What a good idea
I'm curious how you got to "async Rust needs a [tracing] garbage collector" in particular. While it's true that a lot of the issues here are downstream of futures being passive (which in turn is downstream of wanting async to work on embedded), I'm not sure active futures need a tracing GC. Seems to me like Arc or even a borrow-based approach would work, as long as you can guarantee that the future is dropped before the scope exits (which admittedly isn't possible in safe Rust today [0]).
The difficulties with async/await seem to me to be with the fact that code execution starts and stops using "mysterious magic", and it is very hard for the compiler to know what is in, and what is out, of scope.
I am by no means an expert on async/await, but I have programmed asynchronously for decades. I tried using async/await in Rust, Typescript and Dart. In Typescript and Dart I just forget about memory and I pretend I am programming synchronously. Managed memory, runtimes, money in the bank, who is complaining? Not me.
\digression{start} This is where the first problem I had with async/await cropped up. I do not like things that are one thing and pretend to be another - personally or professionally - and async/await is all about (it seems to me) making asynchronous programming look synchronous. Not only do I not get the point - why? Is asynchronous programming hard? - but I find it offensive. That is a personal quibble and not one I expect many others to find convincing. I guess I am complaining.... \digression{end}
In Rust I swiftly found myself jumping through hoops, and having to add lots and lots of "magic incantations" none of which I needed in the other languages. It has been a while, and I have blotted out the details.
Having to keep a piece of memory in scope when the scope itself is not in my control made me dizzy. I have not gone back and used async/await but I have done a lot of asynchronous rust programming since, and I will be doing more.
My push for Rust to bifurcate and become two languages is because async/await has sucked up all the oxygen. Definitely from asynchronous Rust programming, but it has wrecked the culture generally. The first thing I do when I evaluate a new crate is to look for "tokio" in the dependencies - and two out of three times I find it. People are using async/await by default.
That is OK, for another language. But Rust, as it stands, is the wrong choice for most of those things. I am using it for real-time audio processing and it is the right choice for that. But (e.g.) for the IoT lighting controller [tapo](https://github.com/mihai-dinculescu/tapo) it really is not.
I am resigned to my Cassandra role here. People like your good self (much respect for your fascinating talk, much envy for your exciting job) are going to keep trying to make it work. I think it will fail. It is too hard to manage memory like Rust does with a borrow checker with a runtime that inserts and runs code outside the programmer's control. There is a conflict there, and a lot of water is going under the bridge and money down the drain before people agree with me and do what I say...
Either that or I will be proved wrong
Lastly I have to head off one of the most common, and disturbing, counter (non) arguments: I absolutely do not accept that "so many smart people are using it, it must be OK". Many smart people do all sorts of crazy things. I am old enough to have done some really crazy things that I do not like to recall, and anyway, explain Windows - smart people doing stupid things if ever there was an example.
C# / dotnet don't have this issue. The few times I've needed a raw pointer to an object, first I had to pin it, and then I had to make sure that I kept a live reference to the object while native code had its pointer. This is "easier done than said" because most of the time it's passing strings to native APIs, where the memory isn't retained outside of the function call, and there is always a live reference to the string on the stack.
That being said, because GC (in this implementation) is opt-in, I probably wouldn't mix GC and pointers. It's probably easier to drop the requirement to get a pointer to a GC<T> instead of trying to work around such a narrow use case.
Creating pointers without provenance is safe, so the GC can't assume a program won't have them and still be sound. This will always be an issue.
> if a machine word's integer value, when considered as a pointer, falls within a GCed block of memory, then that block itself is considered reachable (and is transitively scanned). Since a conservative GC cannot know if a word is really a pointer, or is a random sequence of bits that happens to be the same as a valid pointer, this over-approximates the live set
Suppose I allocate two blocks of memory, convert their pointers to integers, then store the values `x` and `x^y`. At this point, no machine word points to the second allocation, and so the GC would consider the second allocation to be unreachable. However, the value `y` could be computed as `x ^ (x^y)`, converted back to a pointer, and accessed. Therefore, their reachability analysis would under-approximate the live set.
If pointers and integers can be freely converted to each other, then the GC would need to consider not just the integers that currently exist, but also every integer that could be produced from the integers that currently exist.
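The XOR scenario above can be written out concretely. This is illustrative unsafe Rust: on mainstream platforms the integer round-trip "works", which is exactly why a conservative collector scanning the stored words would wrongly conclude the second allocation is dead:

```rust
// The XOR trick from the comment above: store x and x^y, recover y later.
fn hide(x: usize, y: usize) -> [usize; 2] {
    [x, x ^ y] // neither stored word equals y
}

fn recover_y(stored: &[usize; 2]) -> usize {
    stored[0] ^ stored[1] // x ^ (x ^ y) == y
}

fn main() {
    let x = Box::into_raw(Box::new(1u32)) as usize; // first allocation
    let y = Box::into_raw(Box::new(2u32)) as usize; // second allocation
    let stored = hide(x, y);
    // A conservative GC scanning `stored` sees no word pointing at the
    // second block -- yet the block is still reachable arithmetically:
    let y_again = recover_y(&stored) as *mut u32;
    unsafe {
        assert_eq!(*y_again, 2);
        // clean up both allocations
        drop(Box::from_raw(stored[0] as *mut u32));
        drop(Box::from_raw(y_again));
    }
}
```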
You can only freely convert integers to pointers with "exposed provenance" in Rust which is currently unstable.
https://doc.rust-lang.org/std/ptr/index.html#exposed-provena...
I find the idea of provenance a bit abstract so it's a lot easier to think about a concrete pointer system that has "real" provenance: CHERI. In CHERI all pointers are capabilities with a "valid" tag bit (it's out-of-band so you can't just set it to 1 arbitrarily). As soon as you start doing raw bit manipulation of the address the tag is cleared and then it can be no longer used as a pointer. So this problem doesn't exist on CHERI.
Also the problem of mistaking integers as pointers when scanning doesn't exist either - you can instead just search for memory where the tag bit is set.
What compiler writers realized is that pointers are actually not integers, even though we optimize them down to be integers. There's extra information in them we're forgetting to materialize in code, so-called "pointer provenance", that optimizers are implicitly using when they make certain obvious pointer optimizations. This would include the original block of memory or local variable you got the pointer from as well as the size of that data.
For normal pointer operations, including casting them to integers, this has no bearing on the meaning of the program. Pointers can lower to integers. But that doesn't mean constructing a new pointer from an integer alone is a sound operation. That is to say, in your example, recovering the integer portion of y and casting it to a pointer shouldn't be allowed.
There are two ways in which the casting of integers to pointers can be made a sound operation. The first would be to have the programmer provide a suitably valid pointer with the same or greater provenance as the one that provided the address. The other, which C/C++ went with for legacy reasons, is to say that pointers that are cast to integers become 'exposed' in such a way that casting the same integer back to a pointer successfully recovers the provenance.
If you're wondering, Rust supports both methods of sound int-to-pointer casts. The former is uninteresting for your example[0], but the latter would work. The way that 'exposed provenance' would lower to a GC system would be to have the GC keep a list of permanently rooted objects that have had their pointers cast to integers, and thus can never be collected by the system. Obviously, making pointer-to-integer casts leak every allocation they touch is a Very Bad Idea, but so is XORing pointers.
Ironically, if Alloy had done what other Rust GCs do - i.e. have a dedicated Collect trait - you could store x and x^y in a single newtype that transparently recovers y and tells the GC to traverse it. This is the sort of contrived scenario where insisting on API changes to provide a precise collector actually gets what a conservative collector would miss.
[0] If you're wondering what situations in which "cast from pointer and int to another pointer" would be necessary, consider how NaN-boxing or tagged pointers in JavaScript interpreters might be made sound.
Rust APIs are largely built around references. If you were to put a Vec<T> (dynamic array) into a pointerless Gc<T>, you would be almost entirely unable to access its contents. The only way to access it would be to swap it with an empty Vec, access it, then swap it back, a la Cell. You wouldn't even be able to clone the Vec without storing a dummy version in its place during the call.
https://doc.rust-lang.org/stable/std/cell/struct.Cell.html#m...
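The swap dance looks like this, with a plain Cell<Vec<T>> standing in for the hypothetical pointerless Gc<Vec<T>>:

```rust
use std::cell::Cell;

// To touch the contents of a Cell<Vec<T>> you can't take a reference in;
// you swap the Vec out for a dummy, work on it, and swap it back.
fn push_via_swap(cell: &Cell<Vec<i32>>, value: i32) {
    let mut v = cell.replace(Vec::new()); // swap in an empty dummy Vec
    v.push(value);
    cell.set(v); // swap the real Vec back
}

fn main() {
    let cell = Cell::new(vec![1, 2, 3]);
    push_via_swap(&cell, 4);
    assert_eq!(cell.into_inner(), vec![1, 2, 3, 4]);
}
```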
In your case, do you need to get a pointer to a GC<T> and use it within Rust? I haven't worked with Rust at that level yet, so perhaps I'm ignorant of a more common use case.
I remain bitterly disappointed that so much of the industry is so ignorant of the advances of the past 20 years. It's like it's 1950 and people are still debating whether their cloth and wood airplanes should be biplanes or triplanes.
Passing memory into code that uses a different memory manager is always a case where automatic memory management shouldn't be used. I.e., when I'm using a 3rd party library in a different language, I don't expect it to know enough about my language's memory model to be able to effectively clean up pointers that I pass to it.
If you want this, you might just…want a different language? Which is fine and good! Putting a GC on Rust feels like putting 4WD tyres on a Ferrari sports car and towing a caravan with it. You could (maybe) but it feels like using the wrong tool for the job.
Now, upstreaming OxCaml's unboxed types and stack allocations? That might actually take longer than adding a GC to Rust.
For the rest you'd still use non-GC rust.
Where GC becomes necessary is the case where even static analysis cannot really mitigate the issue of having multiple, possibly cyclical references to the same data. This is actually quite common in some problem domains, but it's not quite as simple as linked lists.
It's useful to have when you have complex graph structures. Or when implementing language runtimes. I've written a bit about these types of use cases in https://manishearth.github.io/blog/2021/04/05/a-tour-of-safe...
And there's a huge benefit in being able to narrowly use a GC. GCs can be useful in gamedev, but it's a terrible tradeoff to need to use a GC'd language to get them, because then everything is GCd. library-level GC lets you GC the handful of things that need to be GCd, while the bulk of your program uses normal, efficient memory management.
For a variety of reasons I don't think this particular approach is a good fit for a JS engine, but it's still very good to see people chipping away at the design space.
https://doc.rust-lang.org/std/boxed/struct.Box.html#method.l...
The fact that anyone felt it necessary to add a "leak" function to the standard library should tell you something about how easy it is to accidentally leak memory.
(It's safe to leak a promise, so there's no way for the borrow checker to prove an async function actually returned before control flow is handed back to the caller.)
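A minimal illustration of the leak function mentioned above (Box::leak from the standard library), and of why leaking is considered safe:

```rust
// Box::leak consumes the Box and hands back a &'static mut;
// the destructor is deliberately never run.
fn leak_greeting() -> &'static mut String {
    Box::leak(Box::new(String::from("hi")))
}

fn main() {
    let s = leak_greeting();
    s.push_str(" there");
    assert_eq!(s.as_str(), "hi there");
    // No owner remains; the allocation simply lives forever. Safe Rust
    // treats this as fine, which is the same reasoning that makes it
    // impossible for the borrow checker to prove a leaked future was
    // ever driven to completion.
}
```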
Rust has loads of other advantages over C++, though.
I think the pursuit of safety is a good goal and I could see myself opting into garbage collections for certain tasks.
- Slows down every access to objects, as reference counts must be maintained
- Something weird that I never bothered with, to enable circular references
Definitely not every access. Between an "increase refcount" and a "decrease refcount" you can access an object as many times as you want.
Also:
- static analysis can remove increase/decrease pairs.
- Swift structs are value types, and not reference counted. That means Swift code can have fewer reference-counted objects than similar Java code has garbage-collected objects.
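The same point holds for Rust's Arc, which is a reasonable analogue of Swift's ARC here: the count is touched only at clone and drop, not on each read. A small sketch:

```rust
use std::sync::Arc;

// Reads through an existing handle touch no reference count:
// each iteration is a plain memory load, with no atomic operations.
fn sum_reads(handle: &Arc<[i32; 3]>, reps: usize) -> i32 {
    (0..reps).map(|_| handle[0]).sum::<i32>()
}

fn main() {
    let data = Arc::new([10, 20, 30]);
    let handle = Arc::clone(&data); // exactly one atomic increment here
    assert_eq!(sum_reads(&handle, 1000), 10_000);
    drop(handle); // exactly one atomic decrement here
    assert_eq!(Arc::strong_count(&data), 1);
}
```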
It does perform slower than GC-ed languages or languages such as C and Rust, but it is easier to write [1] than Rust and C, and needs less memory than GC-ed languages.
[1] The latest Swift is a lot more complex than the original Swift, but high-level code still can be reasonably easy.
- the syntax is hella ugly
- GC needs some compiler machinery: precise GC root tracking with stack maps, space for tracking visited objects, type info, read/write barriers, etc. I don't know how you would retrofit this into Rust without doing heavy-duty brain surgery on the compiler. You can do a conservative GC without that, but that's kinda lame.
If memory management is already resolved with the borrow checker rules, then what case can make you want a GC in a Rust program?
Even in standard Rust, this only applies to a subset of memory management. That’s why Rust supports reference counting, for example, which is an alternative to borrow checking. But one could make the case that automatic garbage collection was developed specifically to overcome the problems with reference counting. Given that context, GC in Rust makes perfect sense.
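The classic case reference counting can't handle can be sketched with Rc: a two-node cycle that leaks even after both external handles are dropped, and that a tracing GC would reclaim:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A node that can point at another node of the same type.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn make_cycle() -> (Rc<Node>, Rc<Node>) {
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(None) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    *b.next.borrow_mut() = Some(Rc::clone(&a)); // cycle closed
    (a, b)
}

fn main() {
    let (a, b) = make_cycle();
    assert_eq!(Rc::strong_count(&a), 2); // held by us and by b.next
    drop(a);
    drop(b);
    // Each node still holds an Rc to the other, so neither count reaches
    // zero: both allocations leak. A tracing GC would see the cycle is
    // unreachable from any root and collect it.
}
```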
Which does at least generally work. It's pretty rare to be bitten by these problems, like, less-than-once-per-career levels of rare (if you honor the advice above)... but certainly not unheard of, definitely not a zero base rate.