Even if you never let it author or write a single line of code, it does so many things well and fast: collecting information, inspecting code, reviewing requirements, reviewing PRs, finding bugs, hell, even researching information online. If you're not leveraging it, you're either in denial or you have AI skill issues, period.
Meanwhile, if you grift hard enough, you can become CEO of a trillion dollar company or President of the United States. Young people are being raised today seeing that you can raise billions on the promise of building self-driving cars in 3 years, not deliver even after 10 years, and nothing bad actually happens. Your business doesn't crater, you don't get sued into oblivion, your reputation doesn't really change. In fact, the bigger the grift, the more people are incentivized to prop it up. Care and professionalism are dead until we go back to an environment that isn't so nurturing for grifts.
Even if you limit your AI use to finding information online through deep research, it's such a time saver and productivity booster that it makes a real difference.
The list of things it can do for you is massive, even if you don't have it write a single line of code.
Yet the counterargument is like "bu..but..my colleague is pushing slop and it's not good at writing code for me". Come on, then use it for the things it's good at, not the things you find unsatisfactory.
For some things LLMs are like magic. For other things LLMs are maddeningly useless.
The irony to me is that anyone who says something like "you don't know how to use the LLM" usually hasn't explored the models enough to understand their strengths and weaknesses, and how random and arbitrary those strengths and weaknesses are.
Their use cases happen to line up with the strengths of the model, and they think it's something special they're doing themselves when it is not.
Feel free to cite said data you've seen supporting this argument.