I've seen way too many programs with a single exception handler right at the base of the program, that just goes "whoops, something bad happened, bye!". I've even seen this anti-pattern used with Go's panic-recover mechanism.
It's an interesting find though, that the actual performance cost for checking the error return is random, variable, and small. Good to know :)
But that is sometimes the wrong design.
If A() calls B(), which calls C(), and C() fails because of a memory allocation failure or a network connection being down, sometimes the best context in which to handle that error is the outermost function A(), not C().
That's why some programmers don't like copy-pasting "if err != nil { return err }" boilerplate across layers when the intended semantics are that errors deliberately propagate up the stack. E.g. A() might have more knowledge of the state of the world, and the code logic to decide whether to retry a broken network connection or simply log the error and exit.
Sometimes handling the error is orthogonal to how a nested call tree is structured. It depends.
A memory allocation failure is unexpected, and is more down to the OS than to the application itself; that's where a panic is in order, as a last-resort "something serious has happened".
In theory, Java's exception handling is supposed to do the same; checked exceptions for expected errors, unchecked for left-field things.
Anyway, that aside, Go's error handling could be better because, unlike e.g. the Either pattern, you're not actually required to handle errors: using `_`, you can easily ignore them. Second, the code style and conventions seem to tell you to just reuse an `err` variable when multiple errors can occur in a function (common in e.g. file handling), which opens the way to accidentally not checking and handling an error.
I'd say that's always the wrong design, with a few exceptions that people can expect to encounter only a few times in their careers.
The entire point of exceptions was to pop errors up the stack until you reach a level where you can handle them. The entire reason they were created was that C-style error handling consists almost entirely of code popping errors up, which made C code very hard to read. The great revolution of error-handling monads was that they made popping errors up require no extra code, gaining the same advantage as exceptions.
Nowadays I suspect exception hierarchies were a mistake, and that the only reasonable way to have exceptions is to make them explicit. Monadic handling normally does not copy this hierarchy and is always explicit, which makes pokemon handlers something people must go out of their way to create, instead of being the only reliable way to catch errors. But going back to the C style doesn't just revert the minor gains while keeping the large one. The large gain is handling errors in the correct place, which Go throws away; the minor gains are verifying things at compile time and making sure the developer knows what errors he is dealing with, on which Go takes a modern approach.
And one could convert one type of failure to the other. If you call a library function that returns the force-you-to-address kind of error, and you determine that you can't actually handle it at the call site, you can just convert it to the invisible kind and let it keep going up.
The force-you-to-address kind is enforced by the compiler. The compiler forces you to check if the function fails. A "checked failure"? "Checked error"? Hmm.
If you need the memory (or disk space) to do something, what else can you really do but wait for memory to become available? The system might just be busy, or the user might have some files they can move if prompted (multitasking systems are the norm these days!). There is a chance memory starvation is the result of contention, in which case someone needs to give up, roll back and try again (i.e. the B() in your example), but it's much more likely -- say the user asks to load a 500 GB file into 50 GB of RAM -- that the memory will never become available, in which case what can you do but abort and tell the user to try something else?
What I like to do on error is signal the error and wait for it to be handled by some other process that can tell the difference between the above policies (by, say, interrogating the system or a human operator). And I do mean wait. If the controller tells us to unwind, we unwind to that restart point, which might be as simple as returning an error code. If you're vaguely familiar with how CL's condition system works, this should sound familiar, but it's also what "Abort, Retry, Fail?" used to mean.
> Sometimes handling the error is orthogonal to how a nested call tree is structured. It depends.
On this I agree, but maybe a little bit stronger: I think for errors like this and for domain errors, an ideal error handling strategy is always orthogonal to how the nested call tree is structured (as above). Programming errors are another story -- if you make a lot of programming errors, you almost certainly want what marcus_holmes suggests.
The one advantage it has over most exception systems in my opinion is that the equivalent of try-finally is much more common than try-catch. With exceptions, code often does weird things because it isn't expecting to lose control flow when an exception is raised, but most languages don't make it easy to catch stack unwinding and clean up. In Common Lisp, unwind-protect plus the style of with-foo macros tends to make it more common for functions to work when control transfers out of them in abnormal ways.
With exceptions it is harder to know where or if it might be handled.
For a number of useful applications, this is exactly the right, correct, and most useful approach.
I currently maintain several successful (within our commercial niche) 100kLOC+ programs that largely use such an architecture.
It puts the error-handling code in one place, and enables common logging, recovery, filtering and display.
It means that the vast majority of the code can happily just assume that the world is full of unicorns and light.
And given that it is written in Java, the program just largely keeps on running, even in the presence of bugs and weird edge cases, and suchlike, a feature our users really like.
Humans are pretty good at going "OK, so that part of the program is having a bad day, I'll report the bug and keep on using the rest of the program".
Except for not even remotely doing that:
1. if a call can fail but returns no useful value (or the caller cares little about it, and thus ignores everything it returns), Go will not complain that you're ignoring the return value entirely
2. if you have several calls which can fail, nothing forces you to actually handle all the errors, because Go doesn't check for that, it relies on the compiler error that a variable must be used:
    v1, err := Foo(false)
    if err != nil {
        fmt.Println("error")
        return
    }
    fmt.Println("first", v1)

    v2, err := Foo(true)
    fmt.Println("second", v2)
will not trigger any error, because the second call simply reassigns the existing `err`, which has already been used once and is thus fine by the compiler.

You could make the case that this is a footgun, sure. I prefer to think of it as giving me the right tools to make the right choice in my specific circumstances.
If they wanted to "force people" they could use Optionals and really force them.
This is no more forcing than mandating checked exceptions -- the user can just return the err immediately, like in Java they can just add a throws clause and propagate for others to handle, or write an empty try/catch and ignore it...
Surely, you can just check whether the optional has a value, use it when it is available, and ignore the other case.
That is certainly not the article's conclusion. The cost is deterministic, constant and non-negligible.
I read that as "used to be non-negligible, is now negligible".
4%-10% depending on compiler and architecture is pretty variable, to my way of thinking. YMMV.
also kinda random, in that there's nothing I can do in the code to determine how much overhead it costs, or change that (apart from ignoring Go's convention on error handling completely, which I'm not going to do because it wasn't a convention for performance reasons in the first place).
This is probably a matter of perspective. Considering the overall performance of Go applications compared to other languages, 4 to 10% is quite low. The measurement error might also be a few percent.
This is always the wrong way to handle errors.
If a function returns an 'error' that needs to be handled at the call site, then it isn't an error, it's a variant return type.
Errors are things that can't be recovered from but must be handled to release resources.
You want this to happen in some central place, not scattered ad-hoc in every place where you use resources; releasing them by hand is worse than manual memory management.
Not all errors require the same treatment and there isn’t a single strategy to manage them.
If for some reason the project considers that error checking should be enforced, that's simple to do using go-lint or other linters.
which is sometimes impossible to do in any meaningful way, which just leads people to put a panic in there, making the end-user experience much worse than having an exception handler at the base of the program / event loop
Regardless of one's view of exception handling, why would anyone even bother to do this? If you don't catch it, the program will exit anyway.
[1] I routinely remove "catch and rethrow" from our code base exactly for this reason. There are ways to log and add metadata to in flight exceptions that don't require rethrowing.
Or, to put it more clearly: there are no errors, only conditions that you dislike. It's better to not burden your programming with your emotional shortcomings, and treat all conditions that you may encounter on an equal footing.
You try to open a file; the file may or may not exist, and both cases are equally likely and you get to decide what your program does in each case. No need to attach an emotionally charged label like "error" in one of the two cases of the conditional. Or worse, as some emotional fanatics do, to bend an otherwise clean programming language by adding features (e.g., exceptions) that help support your sentimental disposition.
Both cases are not equally likely, though. Also, this article is not about the philosophical approach to naming errors versus exceptions. It's about the performance of two technical approaches to handling exceptional/unlikely circumstances.
Of course, if you call fopen with uniformly distributed random filenames then it is extremely unlikely that such files exist. Thus it will fail with probability essentially 1. Yet, I don't want my programming language to force me to make an asymmetric distinction between the two cases.
By "equally likely" I don't mean "having equal probability to occur". This is very difficult to model, and it will depend mostly on the usage patterns of the users of the program. I mean that both cases are worth of the same attention and merit an equivalently serious treatment. No need to disparage one of the two cases as an "error" or an "exception" and require a special language construct.
That might be true for smaller code bases (tracking down exceptions generated from libraries called from libraries, fun!), or code bases where you don't use closed external libraries (that can generate unknowable exceptions), or you use only synchronous code (because asynchronous exceptions wind up jumping to fishkill, welcome to distributed systems (logically, physically or chronologically distributed)).
[EDIT] fixed thinko
[1] https://www.youtube.com/watch?v=inrqE0Grgk0&t=15126s
[2] https://docs.google.com/presentation/d/1WVu4O-ax7punUC2V_XgT...