Without getting too philosophical, though one would not be unjustified in doing so, and just focusing on my own small professional corner (software engineering): these LLM developments mostly kill an important part of thinking and might ultimately make me dumber. For example, I know what a B-tree is and can (could) painstakingly implement one when and if I needed to, a process that would be long, full of mistakes and learning. Now just having a rough idea will be enough, and most people will never get the chance to do it themselves. The B-tree is an intentionally artificial example, but you can extrapolate to more practical or realistic ones. On a more immediate front, there's also the threat to my livelihood. I have significant expenses for the foreseeable future, and if my line of work gets a 10x or even 100x average productivity boost, there might simply be fewer jobs to go around. A farm ox watching the first internal-combustion tractors.
I can think of many other reasons, but those are the most pressing and personal to me.
I have and I don't see the connection with AI-assisted coding.
If your comment was about "generative AI in general", then I think this is the problem with trying to discuss AI on the internet at the moment: it quickly turns into "defend all aspects of AI or else you've lost". I can't predict all aspects of AI, I don't like all aspects of AI, and I can't weigh up the pros and cons of a vast number of distinct topics all at once. (And neither, I suspect, can anyone else.)