>You can't just absolve yourself of responsibility like that. You, the human, are responsible for every single thing the computer does in production, and if you absolve yourself of ownership by leaning on an LLM you end up risking catastrophic helplessness
>So you'd better be confident the LLM can debug every issue that will ever come up otherwise your decision to use the LLM could come back at you really hard.
Most of our programmers are PHP devs. They don't know any C. Once, we hit a bug in the PHP runtime that sporadically crashed our entire application. None of the PHP devs could fix it because they had no experience debugging C code, let alone the PHP runtime specifically. Fortunately, I had C experience, so I was able to dig into PHP's source code and trace the crashes to a memory corruption bug that only surfaced when a very specific set of options was enabled, and only under high production load (so we never saw it during testing). We reverted the changes and the bug disappeared.
What would have happened if there had been no one to investigate and find the root cause of the bug? Without knowing the cause, they'd probably have reverted the changes ASAP, and that alone would have solved the problem for the customers. The situation is pretty similar to what you're describing: there's a class of problems that requires knowing what happens "under the hood" at a lower level, and many shops, especially, say, in webdev, don't have the luxury of engineers who know all the ins and outs of the entire system. So this situation can arise at any time without any LLMs involved: hardware failures, a kernel bug, a runtime bug -- they can all catch you unprepared.
My point is, the risk is definitely there ("I have no idea what's happening or how to fix it"), but it's not novel: it can happen without LLMs, and people usually find workarounds. As for debuggability, although LLMs can produce pretty bad code that is harder to debug, I think it's still debuggable by a human in the rare event that even a sufficiently capable LLM can't crack the problem. The code that, say, ChatGPT generates is pretty readable and understandable.