I like this. What's more, while AI-generated art has a characteristic sameness to it, the human-produced art stands out in its originality. It has character and soul. Even if it's bad! AI slop has made the human-created stuff seem even more striking by comparison. The market for human art isn't going anywhere, just like the audience for human-played chess went nowhere after Deep Blue. I think people will pay a premium for it, just to distinguish themselves from the slop. The same is true of writing, and especially music. I know of no one who likes listening to AI-generated music. Even Sabrina Carpenter would draw fewer objections.
The same, I'm afraid, cannot be said for software, because there is little room for human expression in the code itself. Code is, almost entirely, strictly utilitarian. So we are now at an inflection point where LLMs can generate and validate code that's nearly as good as, if not better than, what we can produce on our own. And not making use of them is about as silly as Mel Kaye still punching hexadecimal opcodes into the RPC-4000 while his colleagues adopt these fancy new things called "compilers". They're off building unimaginably more complex software than they could before, but hey, he gets his pick of locations on the rotating memory drum!
I'm one of the nonexistent anti-LLMers when it comes to software. I hate talking to a clanker, whose training data I don't even have access to, let alone the ability to understand how my input affects its output, just to do what I normally do with the neural net I've carried around in my skull and trained extensively for this very purpose. I like working directly with code. Code is not just a product for me; it is a medium of thought and expression. It is a formalized notation of a process, one that I can use to understand and shape that process.
But with the right agentic loops, LLMs can just do more, faster. There's really no point in resisting. The marginal value of what I do has just dropped to zero.