Nah, in the model you're imagining now the program takes infinite time. But we can observe that our garbage-free Rust program doesn't take infinite time; it's actually very fast. That's because your model is of a GC system, where ensuring no garbage really would need infinite time; a language without GC isn't a GC with zero garbage, it's entirely different - that's the whole point.
More generally, a GC-less solution may be more CPU-intensive, or it may not; although there are rules of thumb, it's difficult to draw any general conclusions. If you work in Java this is irrelevant: your language requires a GC, so this isn't even a question, and thus isn't worth mentioning in a talk about, again, the Java GC.
> A claim that the market is irrational may well be true
Which makes your entire thrust stupid. You depend upon the perfect market fallacy for your point there - the claim that if this were a good idea people would necessarily already be doing it. Once you accept that's a fallacy, you have nothing.
Wat? Where did you get that?
> That's because your model is of a GC system, where ensuring no garbage really would need infinite time; a language without GC isn't a GC with zero garbage, it's entirely different - that's the whole point.
Except it's not different, and working "without garbage" doesn't take infinite time even with a garbage collector. For example, a reference-counting GC also has zero garbage (in fact, that's the GC Rust uses to manage Rc/Arc) and doesn't take infinite time. It does, however, sacrifice CPU for a lower RAM footprint. Have you studied the theory of memory management? The CPU/memory tradeoff exists regardless of what algorithm you use for memory management; it's just that some algorithms allow you to control that tradeoff within the algorithm, while others require you to commit to a tradeoff upfront.
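To make that concrete, here's a minimal Rust sketch (standard library only) of what that tradeoff looks like with Rc:

    use std::rc::Rc;

    fn main() {
        // Rc is a refcounting GC: zero garbage, but CPU is spent on
        // every count update.
        let a = Rc::new(vec![1, 2, 3]);
        let b = Rc::clone(&a);   // increment: count = 2
        println!("count = {}", Rc::strong_count(&a));
        drop(b);                 // decrement: count = 1
        drop(a);                 // count = 0: the Vec is freed immediately
    }

The object is reclaimed the instant the last handle goes away - minimal footprint, paid for in CPU on every clone and drop. That's the commit-upfront end of the dial.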
For example, using an arena in C++/Rust/Zig is exactly about exploiting that tradeoff (Zig in particular is very big on arenas) to reduce the CPU spent on memory management by using more RAM: the arena isn't cleared until the transaction is done, which isn't minimal in terms of RAM, but it requires less CPU. Oh, and if you use an arena (or even a pool), you have garbage, BTW. Garbage means an object that is no longer used by the program but whose memory cannot yet be reused for the next allocation.
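Here's a toy index-based arena in Rust to show what I mean (a hand-rolled sketch, not a real arena library like bumpalo, and simplified accordingly):

    // A toy bump-style arena: alloc is just an append, and nothing is
    // reclaimed until reset() releases the whole transaction at once.
    struct Arena<T> {
        items: Vec<T>,
    }

    impl<T> Arena<T> {
        fn new() -> Self {
            Arena { items: Vec::new() }
        }
        // Cheap allocation: no per-object bookkeeping, just a push.
        fn alloc(&mut self, value: T) -> usize {
            self.items.push(value);
            self.items.len() - 1
        }
        fn get(&self, idx: usize) -> &T {
            &self.items[idx]
        }
        // End of transaction: everything freed in one cheap step.
        fn reset(&mut self) {
            self.items.clear();
        }
    }

    fn main() {
        let mut arena = Arena::new();
        let tmp = arena.alloc("scratch");   // dead two lines below...
        println!("{}", arena.get(tmp));
        let kept = arena.alloc("result");
        // ...yet "scratch" still occupies arena memory here. That's
        // garbage: unused by the program, not yet reusable.
        println!("{}", arena.get(kept));
        arena.reset();
    }

Everything tmp points at is garbage from the moment it's last used until reset() runs - exactly the RAM-for-CPU trade.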
If you think low-level languages don't have garbage, then you haven't done enough low-level programming (and learn about arenas; they're great). There are many pros and many cons to the Rust approach, and it sure is a good tradeoff in some situations, but it's no accident that the biggest Rust zealots - those who believe it's universally superior - are those who haven't done much low-level programming and don't understand the tradeoffs it involves. It's also them who think that those of us who were there first - who picked up C++ and later abandoned it for some use-cases - did so only because it wasn't memory-safe. That was one reason, but there were many others, at least as decisive. Rust fixes some of C++'s shortcomings but certainly not all; Zig fixes others (those that happen to be more important to me) but certainly not all. They're both useful, each in its own way, and neither comes close to being "the right way to program". Knowledgeable, experienced people don't even claim that; they're careful to point out that while they may genuinely believe in some universal superiority, they don't actually have proof of it.
Whether you use a GC (tracing-moving as in Java, refcounting as in Rust's Rc or Swift, or tracing-sweeping as in Go), use arenas, or manually do malloc/free, the same principles and tradeoffs of memory management apply. That's because abstract models of computation - Turing machines, the lambda calculus, or the game of life, pick your favourite - have infinite memory, but real machines don't, which means we have to reuse memory, and doing that requires computational work. That's what memory management means. Some algorithms, like Rust's primitive refcounting GC, aim to reuse memory very aggressively, which means they do some work (such as updating free-lists) as soon as an object becomes unused so that its memory can be reused immediately, while other approaches postpone the reuse of memory in order to do less work. That's what tracing collectors and arenas do, and that's precisely why those of us who do low-level programming like arenas so much.
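And for the aggressive-reuse end, a toy free-list pool in Rust (indices standing in for pointers; a real allocator does this with intrusive lists):

    // A toy pool: free() does bookkeeping immediately on every object
    // (CPU per object), so alloc() can reuse the slot right away
    // (minimal footprint). The arena above makes the opposite trade.
    struct Pool<T> {
        slots: Vec<Option<T>>,
        free: Vec<usize>, // free list of reusable slot indices
    }

    impl<T> Pool<T> {
        fn new() -> Self {
            Pool { slots: Vec::new(), free: Vec::new() }
        }
        fn alloc(&mut self, value: T) -> usize {
            match self.free.pop() {
                Some(i) => { self.slots[i] = Some(value); i } // immediate reuse
                None => {
                    self.slots.push(Some(value));
                    self.slots.len() - 1
                }
            }
        }
        fn free(&mut self, i: usize) {
            self.slots[i] = None; // eager work on every free...
            self.free.push(i);    // ...so the memory is reusable at once
        }
    }

    fn main() {
        let mut pool = Pool::new();
        let a = pool.alloc(1);
        pool.free(a);
        let b = pool.alloc(2); // reuses slot 0 immediately
        assert_eq!(a, b);
    }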
Anyway, the point of the talk is this: the CPU/memory tradeoff is, of course, inherent to all approaches to memory management (though not necessarily in every particular use case), and so it's worth thinking about the fact that the minimal amount of RAM per core on most machines these days is high. This applies to everything. Then he explains that tracing-moving collectors allow you to turn the tradeoff dial - within limits - within the same algorithm. Does that mean tracing-moving collection is always the best choice? No. But do approaches that strive to minimise footprint somehow evade the CPU/memory tradeoff? Absolutely not.
A lesson a low-level programmer might take away from that is something like: maybe I should rely on RC less and, instead, try to use larger arenas where I can.
> Which makes your entire thrust stupid. You depend upon the perfect market fallacy for your point there
Except I don't depend on it. I say it's evidence that cannot be ignored. The market can be wrong, but you cannot assume it is.
The talk you're so excited about actually shows this asymptote. In reality a GC doesn't want zero garbage, because we're trading away RAM to get better performance. So they don't go there, but it ought to have pulled you up short when you thought you could apply this understanding to an entirely unrelated paradigm.
Hence the V2 comparison. So long as you're thinking about those solid fuel toy rockets the V2 makes no sense, such a thing can't possibly work. But of course the V2 wasn't a solid fuel rocket at all and it works just fine. Rust isn't a garbage collected language and it works just fine.
GC is very good for not caring about who owns anything. This can lead to amusing design goofs (e.g. the foreach loop-variable capture bug in C#, fixed in C# 5), but it also frees programmers who might otherwise spend all their time worrying about ownership to do something useful, which is great. However, it really isn't the panacea you seem to have imagined, though at least it's more widely applicable than arenas.
> it's worth thinking about the fact that the minimal amount of RAM per core on most machines these days is high.
Like I said, this means you are less likely to value the non-GC approach when you're small. You can put twice as much RAM in the server for $50, so you do that; you don't hire a Rust programmer to make the software fit.
But at scale the other side matters - not the minimum but the maximum, and so whereas doubling from 16GB RAM to 32GB RAM was very cheap, doubling from 16 servers to 32 servers because they're full means paying twice as much, both as capital expenditure and in many cases operationally.
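To put invented numbers on it (illustrative only, not from the talk): upgrading 16 servers from 16GB to 32GB is about 16 × $50 = $800, once. Going from 16 servers to 32 at, say, $5,000 a box is another 16 × $5,000 = $80,000 of capital expenditure, plus power, rack space, and an ops burden that recurs every month.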
> I say it's evidence
I didn't see any evidence. Where is the evidence? All I saw was the usual perfect-market stuff, where you claim that if it worked they'd already have completed it by some unspecified prior time - in contrast to the fact I mentioned, that they've hired people to do it. I think facts are evidence, and the perfect market fallacy is just a fallacy.
Oh, I see the confusion. That asymptote is for the hypothetical case where the allocation rate grows to infinity (i.e. it remains constant per core as we add more and more cores) while the heap remains constant. Yes, with an allocation rate growing to infinity, the cost of memory management (using any algorithm) also grows to infinity. That this is so obvious was exactly his point: it shows why certain benchmarks don't make sense, as they increase the allocation rate while keeping the heap constant.
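To spell out the shape of it with the standard first-order model (my notation, not the talk's): for a tracing collector with heap size H, live set L, and allocation rate a, each cycle reclaims roughly H - L bytes and does work proportional to L, so

    \text{GC CPU cost rate} \;\propto\; a \cdot \frac{L}{H - L}

Hold H fixed and let a grow without bound, and the cost rate grows without bound. Refcounting and manual free make the cost linear in a directly; either way it diverges.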
> So they don't go there, but it ought to have pulled you up short when you thought you could apply this understanding to an entirely unrelated paradigm.
I'm sorry, but I don't think you understand the theory of memory management. You obviously run into these problems even in C. If you haven't, then you haven't been doing that kind of programming long enough. Some results are just true for any kind of memory management, and have nothing to do with a particular algorithm. It's like how in computational complexity theory, certain problems have a minimum cost regardless of the algorithm chosen.
> But at scale the other side matters - not the minimum but the maximum, and so whereas doubling from 16GB RAM to 32GB RAM was very cheap, doubling from 16 servers to 32 servers because they're full means paying twice as much, both as capital expenditure and in many cases operationally.
I've been working on servers in C++ for over 20 years, and I know the tradeoffs; in this kind of programming, seeing CPU exhausted before RAM is very common. I'm not saying there are never other situations, but if you don't know how common this is, then it seems you don't have much experience with low-level programming. The implication that the more common reason to need more servers is that RAM is exhausted first is just not something you hear from people with experience in this industry. You think the primary reason for horizontal scaling is a dearth of RAM?? Seriously?! I remember that in the '90s, or even the early '00s, we had some problems with not enough RAM on client workstations, but it's been a while.
In the talk he tells memory-management researchers about the economics in industry, as they reasonably might not be familiar with them, but decision-makers in industry - as a whole - are.
Now, don't get me wrong: low-level languages do give you more control over resource tradeoffs. When we use those languages, sometimes we choose to sacrifice CPU for footprint and use a refcounting GC or a pool where appropriate, and sometimes we sacrifice footprint for less CPU usage and use an arena where appropriate. This control is the benefit of low-level languages, but it also comes at a significant cost, which is why we use such languages primarily for software that doesn't deliver direct business value but is "pure overhead" - kernels, drivers, VMs, browsers - or for software running on very constrained hardware.
> I didn't see any evidence.
The evidence is that in a highly competitive environment of great economic significance, where getting big payoffs is a way to get an edge over the competition, technologies that deliver high economic payoffs spread quickly. If they don't, it could be a case of some market failure, but then you'd have to explain why companies that can increase their profits significantly and/or lower their prices choose not to do so.
When you claim some technique would give a corporation a significant competitive edge, and yet most corporations don't adopt it (at least not for most projects), then that is evidence against the claim, because companies are usually highly motivated to gain an advantage. I'm not saying it's a closed case, but it is evidence.