Both RAII and `defer` have proven to be highly useful in real-world code. This seems like a good addition to the C language that I hope makes it into the standard.
By the way, GCC and Clang have __attribute__((cleanup)) (which is the same scope-based clean-up) and have had it for over a decade, and it is widely used in open source projects now.
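For anyone who hasn't seen it, it looks like this (fclosep is a hypothetical helper here; GCC/Clang call it with a pointer to the annotated variable when it leaves scope):

    #include <stdio.h>

    static void fclosep(FILE **fp) {
        if (*fp) fclose(*fp);
    }

    void read_config(const char *path) {
        __attribute__((cleanup(fclosep))) FILE *f = fopen(path, "r");
        if (!f) return;
        /* ... f is closed automatically on every exit path ... */
    }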
In Golang, if you iterate over a thousand files and

    defer f.Close()

on each one, your OS will run out of file descriptors.

I think that defer is actually limited in ways that are good - I don't see it introducing surprising control flow in the same way.
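(Worth noting: the proposed C defer is scoped to the enclosing block rather than the enclosing function as in Go, so the same loop shouldn't accumulate descriptors - a sketch, with paths as a stand-in array:)

    for (size_t i = 0; i < 1000; i++) {
        FILE *f = fopen(paths[i], "r");
        if (!f) continue;
        defer fclose(f);   /* runs at the end of each iteration's block */
        /* ... read from f ... */
    }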
> RAII has also proven to be quite harmful in cases
The downsides of defer are much worse than the "downsides" of RAII. Defer is manual and error-prone, something that you have to remember to do every single time.
It allows library authors to take responsibility for cleaning up resources in exactly one place rather than forcing library users to insert a defer call in every single place the library is used.
But yeah, RAII can only provide deterministic destruction because resource acquisition is initialization. As long as resource acquisition is decoupled from initialization, you need to manually track whether a variable has been initialized or not, and make sure to only call a destruction function (be that by putting free() before a return or through 'defer my_type_destroy(my_var)') in the paths where you know that your variable is initialized.
So "A limited form of RAII" is probably the wrong way to think about it.
I got out my 4e Stroustrup book and checked the index; RAII only comes up when discussing resource management.
Interestingly, the verbatim introduction to RAII given is:
> ... RAII allows us to eliminate "naked new operations," that is, to avoid allocations in general code and keep them buried inside the implementation of well-behaved abstractions. Similarly "naked delete" operations should be avoided. Avoiding naked new and naked delete makes code far less error-prone and far easier to keep free of resource leaks
From the embedded standpoint, and after working with Zig a bit, I'm not convinced about that last line. Hiding heap allocations seems like it makes it harder to avoid resource leaks!
The extra braces appear to be optional according to the examples in https://www.open-std.org/JTC1/SC22/WG14/www/docs/n3734.pdf (see pages 13-14)
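Going by those examples, defer takes a single statement, braced or not (p, q, and r are placeholders):

    defer free(p);                 /* single statement, no braces */
    defer { free(q); free(r); }    /* braces group multiple statements */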
Though I do wonder what the chances are that the C subset of C++ will ever add this feature. I use my own homespun "scope exit" which runs a lambda in a destructor quite a bit, but every time I use it I wish I could just "defer" instead.
Then again, if someone is willing to push it through WG21 no matter what, maybe.
C++ would be a nicer language with native defer. Working directly with C APIs (which is one of the main reasons to use C++ over Rust or Zig these days) would greatly benefit from it.
It would run regardless of whether malloc succeeded or failed, but calling free on a NULL pointer is safe (defined as a no-op in the C spec).
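i.e. something like this is fine even on the failure path (len is a placeholder):

    char *p = malloc(len);
    defer free(p);       /* registered before the NULL check... */
    if (!p) return -1;   /* ...but free(NULL) is a defined no-op */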
    resource, err := newResource()
    if err != nil {
        return err
    }
    defer resource.Close()

IMO this pattern makes more sense, as calling exit behavior in most cases won't make sense unless you have acquired the resource in the first place.

free may accept a NULL pointer, but it also doesn't need to be called with one either.
Related blog post from last year: https://thephd.dev/c2y-the-defer-technical-specification-its... (https://news.ycombinator.com/item?id=43379265)
https://oshub.org/projects/retros-32/posts/defer-resource-cl...
Of course, that idea already isn’t correct in many languages; function arguments are evaluated before a function is called, operator precedence often breaks it, etc, but this moves entire statements, potentially by many lines.
Genuinely curious, as I only have a small amount of experience with C and have found goto to be OK so far.
In any case, the biggest advantage IMO is that resource acquisition and cleanup are next to each other. My brain understands the code better when I see "this is how the resource is acquired, this is how the resource will be freed later" next to each other, than when it sees "this is how this resource is acquired" on its own or "this is how the resource is freed" on its own. When writing, I can write the acquisition and the free at the same time in the same place, making me very unlikely to forget to free something.
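A small illustration of that adjacency with a C11 lock (the lock and the queue_empty helper are assumed to exist elsewhere):

    mtx_lock(&lock);
    defer mtx_unlock(&lock);   /* acquire and release sit on adjacent lines */

    if (queue_empty())
        return 0;              /* the unlock still runs */
    /* ... rest of the critical section ... */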
    using (var resource = acquire()) {
        ...
    } // implicit resource.Dispose();
While we don't have the same simplicity in C because we don't use this "disposable" pattern, we could still perhaps learn something from the syntax and use a secondary block to have scoped defers. Something like:

    using (auto resource = acquire(); free(resource)) {
        ...
    } // free(resource) call inserted here.

That's not so different to how a `for` block works:

    for (auto it = 0; it < count; it++) {
        ...
    } // automatically runs it++, rechecks it < count, and branches after the secondary block of the for loop.
A trivial "hack" for this kind of scoped defer would be to just wrap a for loop in a macro:

    #define using(var, acquire, release) \
        auto var = (acquire); \
        for (bool var##_once = true; var##_once; var##_once = false, (release))

    using (foo, malloc(szfoo), free(foo)) {
        using (bar, malloc(szbar), free(bar)) {
            ...
        } // free(bar) gets called here.
    } // free(foo) gets called here.

Lovely fairy tale. Now can you tell me how you love to scroll back and examine all the defer blocks within a scope when it ends to understand what happens at that point?
But it adds a new dimension of control flow, which in a garbage collected language like Go is less worrisome whereas in C this can create new headaches in doing things in the right order. I don't think it will eliminate goto error handling for complex cases.
But people know it from other languages, and seem to like it, so I guess it is good to have it also in C.
http://robertseacord.com/wp/2020/09/10/adding-a-defer-mechan...
Cleanup is good. Jumping around with "goto" confused most people in practice. It seems highly likely that most programmers model "defer" differently in their minds.
EDIT:
IIRC it was CVE-2025-26465. Read the code and the patch.
2. Defer is mostly useful for C++ code that needs to interact with C API because these two are fundamentally different. C API usually exposes functions "create_something" and "destroy_something", while the C++ pattern is to have an object that has "create_something" hidden inside its constructor, and "destroy_something" inside its destructor.
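In C-with-defer terms, that API shape pairs up in one place (something_t and the create/destroy names are just the generic ones from above):

    something_t *s = create_something();
    if (!s) return -1;
    defer destroy_something(s);   /* the destroy call sits next to the create call */
    /* ... every later return path runs it ... */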
For example, if I have an FFI function that transfers the ownership of some allocator in the middle of the function.
Once they do learn about defer they will come to appreciate it much more.
The point of a CS degree is to know the fundamentals of computing, not the latest best practices in programming that abstract the fundamentals.
learning Python first is the same difficulty as learning C first (because the main problem is the whole concept of programming), and learning C after Python is harder than learning Python after C (because of pointers)
But there are lots of cases in the kernel where we have 10+ goto labels for error paths in complex setup functions. I think when this starts making its way into those areas it will really start having an impact on bugs.
Sure, most of those bugs are low impact (it's rare that an attacker can trigger the broken error paths) but still, this is basically free software quality, it would be silly to leave it on the table.
And then there's the ACTUAL motivation: it makes the code look nicer.
    #define RETURN(x) result = (x); goto CLEANUP

    int myfunc() {
        int result = 0;
        if (commserror()) {
            RETURN(0);
        }
        .....
        /* On success */
        RETURN(1);
    CLEANUP:
        if (myStruct) { free(myStruct); }
        ...
        return result;
    }
The advantage being that you never have to remember which things are to be freed at which particular error state. The style also avoids lots of nesting because it returns early. It's not as nice as having defer but it does help in larger functions.

You also don't have to remember this when using defer. That's the point of defer - fire and forget.
There are several other issues I haven't shown like what happens if you need to free something only when the return code is "FALSE" indicating that something failed.
This is not as nice as defer but up till now it was a comparatively nice way to deal with those functions which were really large and complicated and had many exit points.
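For comparison, roughly the same function with the proposed defer (a sketch; struct my is a stand-in, and I'm assuming the deferred statement reads the variable's value at scope exit, as in the paper's examples):

    int myfunc() {
        struct my *myStruct = NULL;
        defer free(myStruct);   /* free(NULL) is a no-op, so every path is safe */

        if (commserror()) {
            return 0;           /* the deferred free still runs here */
        }
        .....
        /* On success */
        return 1;
    }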
Just write

    result = x;
    goto cleanup;

if you meant result=x; goto cleanup;. At least then you'll be able to follow the control flow without remembering what the magic macro does.

I also dislike RAII because it often makes it difficult to reason about when destructors are run, and it also admits accidental leaks just like defer does. Instead what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup, and which errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.
About RAII, I think your viewpoint is quite baffling. Destructors are run at one extremely well-defined point in the code: `}`. That's not hard to reason about at all. Especially not compared to often spaghetti-like cleanup tails. If you're lucky, the team does not have a policy against `goto`.
Indeed, `defer` as a language feature is an anti-pattern.
It does not allow abstracting initialization/de-initialization routines and encapsulating their execution within the resources themselves; it transfers the responsibility for manually performing the release or de-initialization to the users of the resources - for each use of the resource.
> I also dislike RAII because it often makes it difficult to reason about when destructors are run [..]
RAII is a way to abstract initialization, it says nothing about where a resource is initialized.
When combined with stack allocation, now you have something that gives you precise points of construction/destruction.
The same can be said about heap allocation in some sense, though this tends to be more manual and could also involve some dynamic component (ie, a tracing collector).
> [..] and also admits accidental leaks just like defer does.
RAII is not memory management, it's an initialization discipline.
> [..] what I would want is essentially a linear type system in the compiler that allows one to annotate data structures that require cleanup and errors if any possible branches fail to execute the cleanup. This has the benefit of making cleanup explicit while also guaranteeing that it happens.
Why would you want to replicate the same cleanup procedure for a certain resource throughout the code-base, instead of abstracting it in the resource itself?
Abstraction and explicitness can co-exist. One does not rule out the other.
I would not introduce Zig's errdefer though. That one would need additional semantic changes in C to express errors.
It starts out small. Then before you know the language is total shit. Python is a good example.
I am observing a very distinguishable phenomenon where the internet makes very shallow ideas mainstream and ruins many, many good things that stood the test of time.
I am not saying this is one of those instances, but what the parent comment says makes sense to me. You can see another comment whose author now wants to go further and wants destructors in C. Because of the internet, such voices can now reach out to each other, gather, and cause a change. But before, such voices would have had to go through a lot of sensible heads before they would be able to reach each other. In other words, bad ideas got snuffed out early before the internet, but now they go mainstream easily.
So you see, it starts out slow, but then more and more stuff gets added which diverges more and more from the point.
With respect, that sounds a bit nuts. It's been 37 years since C89; unless you're targeting computers that still have floppy drives, why give up on so many convenience features? Binary prefixes (0b), #embed, defined-width integer types, more flexibility with placing labels, static_assert for compile-time sanity checks, inline functions, declarations wherever you want, complex number support, designated initializers, countless other things that make code easier to write and to read.
Defer falls in roughly the same category. It doesn't add a whole lot of complexity, it's just a small convenience feature that doesn't add any runtime overhead.
The one huge advantage of C is its ubiquity - you can use it on the latest shiny computer / OS / compiler as well as some obscure embedded platform with a compiler that hasn't been updated since 2002. (That's a rare enough situation to be unimportant, right? /laughs in industrial control gear.)
I'm wary of anything which fragments the language and makes it inaccessible to subsections of its traditional strongholds.
While I'm not a huge fan of the "just use Rust" attitude that crops up so often these days, you could certainly make an argument that if you want modern language features you should be using a more modern language.
(And, for the record, I do still write software - albeit recreationally - for computers that have floppy drives.)
You're missing out on one of the best-integrated and most useful features that have been added to a language as an afterthought (C99 designated initialization). Even many modern languages (e.g. Rust, Zig, C++20) don't come close when it comes to data initialization.
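For anyone who hasn't used them, a quick C99 designated-initialization sketch (the struct is made up):

    typedef struct {
        int x, y, w, h;
        const char *label;
    } rect_t;

    rect_t r = {
        .w = 640,
        .h = 480,
        .label = "viewport",   /* unnamed members (.x, .y) are zero-initialized */
    };

    int lut[16] = { [0] = 1, [15] = -1 };   /* array designators work too */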
I think defer{} can simplify these flows sometimes, so it can indeed be useful for good old style C.
If you want to write C++, write C++. If you want to write C, but want resource cleanup to be a bit nicer and more standard than __attribute__((cleanup)), use C with defer. The two are not comparable.
The goto approach also covers some more complicated cases.
Is that supposed to exacerbate how poor that choice is? External assembly is great.
If you can't compile K&R, you should label your language "I can't believe it's not C!".
I don't have time to learn your esolang.
You can just look at the code in front of you to see what defer is doing. With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen.
Sure, if the situation arises frequently, it's nice to be able to design a type that "just works" in C++. But if you need to clean up reliably in just this one place, C++ destructors are a very clunky solution.
> With destructors, you need to know what type you have (not always easy to tell), then find its destructor, and all the destructors of its parent classes, to work out what's going to happen
Isn't it a code quality issue? It should be clear from class name/description what can happen in its destructor. And if it's not clear, it's not that relevant.
The classical case of 'one destructor per class' would require designing the entire code base around classes, which comes with plenty of downsides.
> Anyone who writes C should consider using C++ instead
Nah thanks, been there, done that. Switching back to C from C++ about 9 years ago was one of my better decisions in life ;)
It can. An object whose destructor does the clean-up should be created only once such clean-up is needed. In the case of a file, for example, a file object should be created at file opening, so that it can close the file in its destructor.
The decision would be easier if the C subset in C++ would be compatible with modern C standards instead of being a non-standard dialect of C stuck in ca. 1995.
Would be a bit clunky, but that can (¿somewhat?) be hidden in a macro, if desired.
Defer takes 10 lines to implement in C++. [1]
You don't have to wait 50 years for a committee to introduce basic convenience features, and you don't have to use non-portable extensions until they do (and in this case the __attribute__((cleanup)) has no equivalent in MSVC), if you use a remotely extensible language.
[1] https://www.gingerbill.org/article/2015/08/19/defer-in-cpp/
My comment is targeted towards the programmer who is excited about features like this - you can add an extra two characters to your filename and trivially implement those improvements (and more) yourself, without any alterations to your codebase or day to day programming style.