IMHO when debugging software needs to be automated to the point that a library needs to be written (and itself debugged), there's an underlying problem which can't be solved by adding more layers of complexity - as that will only introduce more bugs. When these bugs are in the software you're using to debug, things can quickly take a turn for the worse.
Is that code really taking the user's input (a string), getting object addresses and offsets (numbers) from that, then converting those into strings to build a command string, which then gets parsed back into numbers for the debugger to ultimately use to create a watchpoint? I think that is itself a good example of how the "more code, more bugs" principle can apply: all this superfluous conversion code has introduced a bug.
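The round trip described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Chisel's actual code - the function names and the fake address are made up - but it shows how each number-to-string-to-number hop is a place for a bug to hide:

```python
# Hypothetical sketch of the round trip described above.
# Names and addresses are illustrative, not Chisel's actual API.

def resolve_address(expr):
    # Pretend the debugger resolved the user's expression to an address.
    return 0x7F000000

def build_watch_command(expr, offset):
    # Numbers get formatted back into a command string...
    addr = resolve_address(expr) + offset
    return "watchpoint set expression -- %d" % addr

def parse_watch_command(cmd):
    # ...which the debugger then re-parses into a number.
    return int(cmd.rsplit(" ", 1)[-1], 0)

cmd = build_watch_command("self->_ivar", 8)
assert parse_watch_command(cmd) == 0x7F000008
```

If the formatting and the re-parsing ever disagree (say, one side assumes hex and the other decimal), the watchpoint silently lands at the wrong address - which is the kind of bug the comment is pointing at.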
Here's a good article about that, although it doesn't mention the situation where the bugs you introduce end up being in the software you need to use to remove bugs...
http://blog.codinghorror.com/the-best-code-is-no-code-at-all...
I've hacked plenty of GDB. It's just a program like anything else. Why wouldn't you want to debug it? There's nothing mystical or special about it. When you combine the features you describe (which are, in fact, quite non-trivial) with code to look up symbols, deal with remote gdbserver instances, and paper over extreme differences between operating systems and executable formats, you end up with a very complex program that has the same maintenance needs as any other.
Nobody is claiming there's anything mystical or special about debugging a debugger; the point is that it takes a lot of time and energy. Even launching GDB is a pain. And there's a reason much of the research on live debugging uses Smalltalk: when you debug a debugger you want as much reflection as possible, and Smalltalk has reified almost all of its internal machinery (see for example the Moldable Debugger).
What is the underlying problem you're referring to?
It is admittedly a rabbit hole when you're writing code to debug code. But isn't that the whole spirit of writing tools to make your life easier and more efficient? We pile new abstraction layers atop more and more abstraction layers, and then we get computing in its impressive form as it exists today.
GDB and LLDB and any other debugger are huge software libraries for debugging, and yeah, they might introduce bugs as well, but does that mean they shouldn't be used or that they're not useful? I find it quite useful that LLDB has a scripting interface to automate debug sequences I find myself doing over and over. And since there is a scripting interface, we can build open source libraries for common debug tasks, so when there is a bug, the bug is shallow due to many eyes.
I.e., given C++ or Objective-C, what the compiler describes to the debugger requires the debugger to know and do a lot to get actual values out.
For C++, it's actually pretty good, except function calls/etc require understanding the ABI. I.e., the debug info I get tells me "if I want to get the value of this variable, I evaluate this expression". I know the layouts and how to interpret it, etc. It's rare that the expression is too complicated (though it may require piecing together registers and memory, etc, it's just a state machine).
For Objective-C, even things like "instance variables" require the debugger to understand a lot. Java has a fairly reasonable agent, etc.
Part of this is that the type systems of the debug info formats (DWARF, etc) are very simple, so even though they theoretically support things like function calls, that support is rarely used to provide the functionality necessary, and the debugger is left having to do it itself.
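To make "it's just a state machine" concrete, here is a minimal sketch of a DWARF-style location-expression evaluator. The opcode names echo real DW_OP_* operations, but the encoding is heavily simplified (real DWARF uses byte-coded LEB128 operands and many more opcodes), so treat this as an illustration of the idea, not the format:

```python
# Toy evaluator for DWARF-style location expressions: a stack machine
# that combines registers and memory to find where a variable lives.
# Opcodes mirror a few real DW_OP_* operations in spirit only.

DW_OP_BREG = "breg"    # push (register value + offset)
DW_OP_PLUS = "plus"    # pop two, push their sum
DW_OP_DEREF = "deref"  # pop an address, push the memory word there

def evaluate(expr, registers, memory):
    stack = []
    for op in expr:
        if op[0] == DW_OP_BREG:
            _, reg, offset = op
            stack.append(registers[reg] + offset)
        elif op[0] == DW_OP_PLUS:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op[0] == DW_OP_DEREF:
            stack.append(memory[stack.pop()])
    return stack[-1]

# "The variable is at [rbp - 8]": compute the address, then load it.
regs = {"rbp": 0x1000}
mem = {0x1000 - 8: 42}
value = evaluate([(DW_OP_BREG, "rbp", -8), (DW_OP_DEREF,)], regs, mem)
assert value == 42
```

Everything beyond this kind of mechanical evaluation - calling a function in the inferior, walking Objective-C runtime structures - is where the debugger has to supply its own knowledge, which is the point being made above.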
But the pull request he created (https://github.com/facebook/chisel/pull/117) didn't fix that; instead it just replaced that function call with manually parsing the literal in the one specific place he was having problems with.
Did I miss something there? That seems like a really weird "solution". Why not just fix the original function?