> You do it in a loop. Keep looping and fixing until the code runs.
I suppose, but we shouldn't need to brute force our tools into working...
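For anyone following along, the workflow you're describing boils down to something like this sketch (the `ask_model_to_fix` callback is hypothetical, standing in for whatever agent or prompt rewrites the file from the error output):

```python
import subprocess
import sys

def run_script(path: str) -> subprocess.CompletedProcess:
    """Run a generated script and capture stderr; a nonzero returncode means it failed."""
    return subprocess.run([sys.executable, path], capture_output=True, text=True)

def loop_until_it_runs(path: str, ask_model_to_fix, max_attempts: int = 5) -> bool:
    """The 'keep looping and fixing' workflow: rerun, feed the error back, repeat."""
    for _ in range(max_attempts):
        result = run_script(path)
        if result.returncode == 0:
            return True  # it finally runs -- which still says nothing about correctness
        # hand the traceback back to the model so it can rewrite the file
        ask_model_to_fix(path, result.stderr)
    return False
```

Note that the loop's only success signal is "it exits cleanly", which is exactly my worry: it brute-forces past runtime errors while leaving logic errors untouched.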
And as you point out in that article, some of these issues won't be caught by the compiler or interpreter. Where we disagree is that I think most of them are introduced by the inherent problem of hallucination, not because the model isn't large enough or wasn't trained on the right data. That is, I don't think this is something we can engineer our way out of; solving it will require changes at the architectural level.
Yes, ultimately we still need existing software engineering practices to confirm that the output is correct. But in the age of "vibe coding", when people are deploying software they've barely inspected or tested (many of them without even the skills or experience to do so!), built by tools that can produce thousands of lines of code in an instant, all of those practices go out the window. That should scare all of us, since it will inevitably drive down the average quality of software.
I reckon that the number of experienced programmers who will actually go through that effort is minuscule. Realistically, reviewing and testing code takes a great deal of effort, and it's often not the fun part of the job. If these tools can't be relied on to help me with tasks I sometimes don't want to do, and if I have to babysit them every step of the way, how much more productive are they really making me? There's a large disconnect between how they're being promoted and how they're actually used in the real world.
Anyway, it's clear that we have different views on this topic, and we use LLMs very differently, but thanks for the discussion. I appreciate the work you're doing, and your content is always informative. Cheers!