I agree, and if I say things that are incorrect, then I definitely want to fix them, because I value being correct.
But what I am meeting in this thread is people wanting to do some language-lawyer version of trying to prove I am incorrect, without addressing the substance of what I am actually saying. I think your replies have been the only exception to this (and only just).
I realize my original posting was pretty brusque, but the article was very bad, and I am very concerned about the ongoing deterioration of software quality; the hivemind responses to articles like this on HN are, I think, part of the problem.
I know that Rust people are also concerned with software quality, and that's good. I just think most of Rust's theories about what will help, and most of the ways these are implemented semantically, are just wrong.
So if something I am saying doesn't seem to make sense, or seems "incorrect", well, maybe it's that I am just coming from a very different place in terms of what good programming looks like. The code that I write just looks way different from the code you guys write, the things I think about are way different, etc. So that probably makes communication much harder than it otherwise would be, and makes it much easier to misunderstand things.
On the technical topic being discussed here...
Using a bump allocator in the way you just did, on the stack for local code that uses the bump allocator right there, is semantically correct, but not a very useful usage pattern. In a long-running interactive application, that is being programmed according to a bulk allocation paradigm that maybe is "data oriented" or whatever the kids call it these days, there are pretty much 4 memory usage patterns that you ever care about:
(1) Data baked into the program, or that is initialized so early at startup that you don't have to worry about it. [This is 'static in Rust].
(2) Data that probably lives a long time, but not the whole lifetime of the program, and that will be deallocated capriciously at some point. (For example, an entry in a global state table).
(3) Data that lasts long enough that local code doesn't have to care about its lifetime, but that does not need to survive long-term. For example, a per-frame temporary arena, or a per-job arena that lasts the lifetime of an asynchronous task.
(4) Data that lives on the stack, and thus can't ever be used upward on the stack (by anything that outlives the current frame).
Now, the thing is that category (3) was not really acknowledged in a public way for a long time, and a lot of people still don't really think of it as its own thing. (I certainly didn't learn to think about this possibility in CS school, for example). But in cases of dynamic allocation, category (3) is strictly superior to (4) -- because it's approximately as fast, and you don't have to worry about your alloca trying to survive too long. You can whip up a temporary string and just return it from your function and nobody owns it and it's fine. So having your program really lean on (3) in a big way is very useful. This is what I was saying before about pretending to have a garbage collector, but you don't pay for it.
So if you are doing a fast data-oriented program (I don't really use the term "data-oriented" but I will use it here just for shorthand), dynamic allocations are going to be categories 1-3, and (4) is just for like your plain vanilla local variables on the stack, but these are so simple you just don't need to think about them much.
Insofar as I can tell, all this lifetime analysis stuff in Rust is geared toward (4). Rust wants you to be a good RAII citizen and have "resources" owned by authoritative things that drop at very specific times. (The weird thing about "resources" is that in reality this almost always means memory, and dealing with memory is very very different from dealing with something like a file descriptor, but this is genericized into "resources", which I think is generally a big mistake that many modern programming language people make).
With (1), you don't need any lifetime checking, because there is no problem. With (2), well, you can leak and whatever, but it is just your problem to make sure that doesn't happen, because it is not amenable to static analysis. With (3), you could formalize a lifetime for it, but it is just one quasi-global lifetime that you are using for lots of different data, so by definition it cannot do very much work for you. You could use it to prevent storing something in category (3) into a global, and that's useful to a degree, but in reality this problem is not hard to catch without it, and it doesn't seem worth the amount of friction required to address it. Then there is (4), which, if you are not programming in RAII style, barely needs checking, because everything there is simple. Besides, the vast majority of common stack violations are statically detectable even in C. (The fact that C compilers did not historically do this is really dumb, and has been a source of much woe, but it is very easy to detect, for example, when a function returns a pointer to a local.) Yes, this is not thorough in the way Rust's lifetime checking is, and this class of analysis will not catch everything Rust does, but honestly it will catch most of it, at no cost to the programmer.
So when I said "Rust does not allow you to do bulk memory allocation", what I meant is: the way the language is intended to be used, most of your resources are of type (4), and it prevents you from assigning them incorrectly to other type-(4) resources that have shorter lifetimes, or to (2) or (1).
But if almost everything in (4) is so simple you don't take pointers to it and whatnot, and if most of your resources are (3), they have the same lifetime as each other, all over the place, so there is no use checking them against each other. So now the only benefit you are getting is ensuring that you don't assign (3) to (2) or (1). But the nature of (3) is such that it is reset from a centralized place, so that it is easy, for example, to wipe the memory each frame with a known pattern to generate a crash if something is wrong, or, if you want something more like an analytical framework, to do a Boehm-style garbage collector thing on your heap (in Debug/checked builds only!) to ensure that nothing points into this space, which is a well-defined and easy thing to do because there is a specific place and time during which that space is supposed to be empty.
So to me, "programming in Rust" involves living in (4) and heavily using constructors and destructors, whereas I tend to live in (3) and don't use constructors or destructors. (I do use initializers, which are the simple version of constructors where things can be assigned to constant values that do not require code execution and do not involve "resources" -- basically, could you memcpy the initial value of this struct from somewhere fixed in memory?)

Now, the weird thing is that maybe "programming in Rust" has changed since the last time I argued with Rust people. It used to be the sentiment that one should minimize use of unsafe -- that it should just be for stuff like lock-free data structure implementations, or weird SIMD intrinsics, or whatever -- but people in this thread are saying: no man, you just use unsafe all over the place, you totally go for it. To that I can only repeat what I said above: if your main way of using memory is unsafe code wrapped in a pretend-safe function, then the program does not really have the memory safety it is claiming, so why pretend to use Rust's checking facilities? And if you're not really using those, why use the language?
So that's what I don't get here. Rust is supposed to be all about memory safety ... isn't it? So the "spirit of Rust" is something about knowing your program is safe because the borrow checker checked it. If I am intentionally programming in a style that prevents the borrow checker from doing its job, is this not against the spirit of the language?
I'll just close by saying that one of the main reasons to live in (3) and not do RAII is that the code is a lot faster, and a lot simpler. The reason is that RAII encourages you to conceptualize things as separately managed when they do not need to be. This seems to have been misunderstood in many of the replies above, with people thinking I am talking about particular features of Rust lifetimes or something. No: it is RAII at the conceptual level that is slow.
> We had a keynote at Rustconf about how useful generational arenas are as a technique in Rust.
If that's the one I am thinking of, I replied to it at length on YouTube back in 2018.