Now I'm sure that kernel-level stuff wasn't designed like this, and probably won't be for some time (I'm hoping Rust changes that story, but I'm not a low-level coder), so debuggers may still be necessary; I'm just making the point that it seems possible to design software in such a way that the need for one is drastically reduced, if not eliminated. Or at least, that's how it seems to me in high-level-language-land (things may differ at the assembly and C levels).
EDIT: I apologize for going off-topic; this probably isn't the ideal place to start an "are debuggers really necessary if you design code correctly?" discussion. After all, this is part 3 of a series. My bad.
Meanwhile I'll rely on the smelly old debugger to dig me out of threading weirdness, memory corruption, resource leaks, and other people's bugs, and to provide a window into a combinatorially massive, input-dependent state space my feeble mind simply can't fathom while writing tests for a tiny corner of a typical system that in aggregate amounts to millions of LOC.
Just as an FYI, most of these are theoretically eliminated, or at least drastically mitigated, by a switch from mutable state to immutable, purely functional data structures. The cost, of course, is somewhat increased memory consumption and a bit more slowness (or a lot, depending on the algorithm).
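To make the mutable-vs-immutable distinction concrete, here's a minimal Python sketch; the `Account`/`deposit` names are invented for illustration, not from any real codebase:

```python
from dataclasses import dataclass, replace

# Mutable style: anyone holding a reference can observe (or cause)
# changes, which is where "state changed under me" bugs come from.
class MutableAccount:
    def __init__(self, balance):
        self.balance = balance

    def deposit(self, amount):
        self.balance += amount  # in-place mutation

# Immutable style: every "change" produces a fresh value, so old
# snapshots stay valid and can be inspected or compared freely.
@dataclass(frozen=True)
class Account:
    balance: int

def deposit(account: Account, amount: int) -> Account:
    # replace() builds a new frozen instance instead of mutating
    return replace(account, balance=account.balance + amount)

before = Account(balance=100)
after = deposit(before, 50)
assert before.balance == 100  # the old state is untouched
assert after.balance == 150
```

The memory/speed cost mentioned above shows up because each update allocates a new object instead of writing in place; persistent data structures (as in the BEAM languages) mitigate that by sharing unchanged parts.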
If you ever get a chance to work with the BEAM VM, it's pretty tight.
For example, if I'm working on a Python script under 1000 lines, I would never use a debugger. I might use Wireshark or Linux system tools if something was really wrong, but usually the traceback or program output is sufficient.
I've also tried to debug an Arduino project where I had a lot less knowledge of how the system works, and if I hadn't been able to attach a debugger, I don't want to know how long it would have taken me to find the problem (calling a null function pointer caused an interrupt that didn't have a handler).
I've also tried to debug other people's projects in high-level languages without a debugger, and after taking about an hour to figure the problem out by pure reasoning, I realized it would have been very quick and easy to find with a debugger.
I think if you look at what a program is doing and can't form a testable hypothesis as to what is happening, a debugger will certainly be faster at that point, if you know how to use it.
Well, I mean... Ever wonder why that is? Your understanding of the various states the program can get into is fresh in your mind. Come back to it in a year after not looking at it all that time and suddenly you're relying hardcore on your unit test suite to contain that same knowledge in "automated proof" form... OR you start spending a lot of time in the debugger to "re-understand" the system, slowly and manually... Or both.
If you need to use a debugger, it's because you're in some unexpected state. With full understanding comes full control, and in those cases, no, you need neither a debugger NOR a test suite; but since a test suite on well-written code already identifies many possible state failures, I've found that leaning on it instead of a debugger works out better in the end.
Another debugger I now recommend people (in general) look at is Squeak Smalltalk's, which allows a form of "top-down" programming where undefined functions are called and then filled in at runtime. The debugger/editor in this post borrows a little from that idea in spirit. Part 3 in particular shows how it can be used even when there are no bugs.
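For readers who haven't seen Squeak, here's a rough Python analogy (entirely invented for illustration, not how Smalltalk actually implements it) of that "define the missing method at the point of call, then resume" workflow:

```python
# A base class whose attribute-lookup failure stands in for the Squeak
# debugger pausing on a message the receiver doesn't understand.
class TopDown:
    def __getattr__(self, name):
        # In Squeak the debugger would stop here and let you write the
        # method interactively; this sketch just reports what's missing.
        raise NotImplementedError(f"define {name} before continuing")

class Order(TopDown):
    def total(self):
        return self.subtotal() + self.tax()  # tax() not written yet

    def subtotal(self):
        return 100

order = Order()
try:
    order.total()
except NotImplementedError:
    # The "debugger" step: supply the missing method, then retry.
    Order.tax = lambda self: 8

print(order.total())  # → 108
```

The real Smalltalk version is far smoother, since the debugger lets you write `tax` in place and continue the very same activation rather than re-running the call.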
About being off-topic: well, no one had made another top-level comment, and by the time of your first post the thread was already 5 hours old. Some discussion is definitely better than none, so thank you for inciting it. If it buried some more on-topic discussion, that would be unfortunate, though.
However, every once in a while I need to use gdb and its remote debugging features to figure out a particularly weird runtime issue, and I'm glad it's there.
Then I can often take the odd values I see in the debugger and start putting them into a test or a minimal repro.