>Negligence includes failing to maintain and train staff properly.
>To end up in such a situation is a catastrophic engineering disaster.
That was a novel bug in the PHP runtime which manifested only under very specific PHP configurations and a very specific load. Do you recommend hiring a PHP runtime expert just in case it happens again? Earlier this year we also ran into a rare Linux kernel bug. Do we need to hire a Linux kernel expert, just in case? Or teach PHP programmers how to debug kernel drivers? This kind of never-seen-before stuff happens quite often under high load (even though we do load testing).
What really matters, I think, is how the entire delivery pipeline is designed: whether we have tests, QA, monitoring, whether it's easy to revert a bad release, whether we have on-call engineers, tech support, backups, replicas, etc. It's not realistic to have experts for every possible problem in the stack, and it's not possible to always ship bug-free software; what matters more is whether our engineering practices let us recover quickly from problems we've never seen before.

And in my analogy, if an LLM suddenly produces unstable code (even though it passed all QA checks during testing) and no one immediately knows how to fix it, that's no different from running into a kernel, runtime, or hardware bug, where the chance of anyone immediately knowing how to fix the root cause is also close to zero. You must already have processes in place which let you recover from such unexpected breaking bugs quickly, with LLMs or without.

Sure, if the LLM crashes your production server every single day, then it's not a very useful LLM. I hope future coding LLMs will continue to improve.
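To make the "recovery over prediction" point concrete, the core of such a pipeline gate can be sketched in a few lines. This is a toy illustration, not any real deployment tool; all the names (`release`, `deploy`, `health_check`, `rollback`) are hypothetical:

```python
# Sketch of a release gate: the pipeline doesn't try to anticipate every
# failure mode; it just watches a health check after each deploy and
# reverts automatically when the check fails. Hypothetical names only.

def release(deploy, health_check, rollback, retries=3):
    """Deploy a new version; keep it only if it looks healthy."""
    deploy()
    for _ in range(retries):
        if health_check():
            return "kept"          # release looks good, leave it in place
    rollback()                     # never-seen-before failure: just revert
    return "rolled back"

# Simulated run: a release whose health check always fails gets reverted,
# without anyone needing to understand the root cause first.
events = []
result = release(
    deploy=lambda: events.append("deploy"),
    health_check=lambda: False,
    rollback=lambda: events.append("rollback"),
)
print(result)   # -> rolled back
print(events)   # -> ['deploy', 'rollback']
```

The point is that the same gate catches a kernel bug, a runtime bug, or bad LLM-generated code identically: it doesn't need to know the root cause to recover.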