No, I am saying that you are trading what you spend effort on when using LLMs. Do you give up the effort of writing for the effort of review? Whether you should, and whether you will benefit, really depends on your expertise and skill set.
If you're a novice, or even intermediate, use AI in a feedback kind of way: see if it can pinpoint and explain problems with your current code and offer recommendations. Not code, but dot-point, human-language feedback like you'd get from a senior dev.
If the AI is writing code that is beyond what you could produce yourself, you have no business trusting that code. As anyone who can produce high-level code will tell you, AI is wrong too frequently to trust blindly. You have to engage critically with its output. Full auto is a disaster waiting to happen.
The problem is that, for many, good-looking but fundamentally flawed code is indistinguishable from the real deal, and so they simply outsource their critical thinking to a thing that cannot do it for them. This is a big problem, and the evidence for this claim is littered throughout this comment section.