> Errors are normal, not exceptional.
The _handling_ of errors is normal. Code that doesn't consider errors is not production code.
And granted, in Go, control flow is driven by errors more often than in C++ or Java. Sentinel error values are common. See, for example, uses of errors.Is, checks for io.EOF, packages that define ErrSituationA and ErrSituationB, and so on.
But my argument was about errors that can't be dealt with locally, where the origination and ultimate handling are very far apart. A given flow will encounter these errors relatively rarely compared to the happy path (and if it's not rare, you probably need to fix or change something). Having an intuition about this is important for predicting your code's performance. For example:
- The SQL call failed because the network connection dropped; the client gets a 500 or 502, or the call is retried.
- A call to an external service failed because the network was bad; it gets retried.
- The SQL call succeeded, but the record the client asked for wasn't found, so the client gets a 404.
- Writing to a temporary file failed because the disk is full, so some batch job fails with an error.
Apart from potential concerns about DoS, worrying too much about the performance of error handling in these relatively rare cases is absolutely premature optimization.
DoS isn't even much of a concern. I just benchmarked capturing a call stack in Go, and it's on the order of a few microseconds. Unless you're in performance-critical code (and you're benchmarking, right?), it's fine.
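If you want to reproduce that measurement yourself, here's a rough sketch using runtime.Callers, which is roughly what a stack-annotating error library does when constructing an error. (A proper testing.B benchmark would be more rigorous; this is just a quick loop, and the exact numbers will vary by machine and stack depth.)

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// captureStack records up to 32 program counters from the current
// goroutine's stack and reports how many frames it captured.
func captureStack() int {
	var pcs [32]uintptr
	return runtime.Callers(2, pcs[:]) // skip Callers and captureStack itself
}

func main() {
	const n = 10000
	start := time.Now()
	for i := 0; i < n; i++ {
		captureStack()
	}
	per := time.Since(start) / n
	fmt.Printf("~%v per stack capture\n", per)
}
```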