I've been thinking about this - you're right that LLMs are not going to be deterministic (AIUI) when it comes to producing code to solve a problem.
BUT neither are humans: give two different humans the same task and, unless they copy one another, you will get two different results.
Further, as those humans evolve through their careers, the code they produce will also change.
Now, I do want to point out that I'm very much still at the "LLMs are an aid, not the full answer... yet" point, but a lot of the arguments against them seem to be (rapidly) reaching the point where they're no longer valid (AI slop and all).