This isn’t to say there’s no hype. Just that if you’re not seeing big productivity gains, you need to make sure you really are an outlier and not just surplus to requirements.
Rather, I hear a lot of nuanced opinions about how the tech is useful in some scenarios, but that the net benefit is unclear. That is, the tech has enough drawbacks that extracting actual value from it takes real effort. This is an opinion I personally share.
In most cases, those "big productivity gains" are vastly blown out of proportion. In the context of software development specifically, sure, you can now generate thousands of lines of code in an instant, but writing code was never the bottleneck. The bottleneck was always the effort of carefully designing and implementing correct solutions to real-world problems. These new tools can approximate that to an extent, given relevant context and expert guidance, but the output is unreliable and very difficult to verify.
So anyone who claims "big productivity gains" is likely not bothering to verify the output, which in most cases will eventually come back to haunt them and/or anyone who depends on their work. And this should concern everyone.
This is overly dismissive. There are many things that are possible now that weren't before, precisely because writing the code is no longer the bottleneck, such as porting parts of a codebase from managed to unmanaged code on a team with limited capacity. Writing code is about a third of the job. Another third is analysis, which also benefits from AI: it lets people who aren't very good at it punch above their weight. The final third is-
> the effort to carefully design and implement correct solutions to real-world problems.
That's problem-solving, and that part doesn't get sped up reliably, and likely never will.
The only downside is that you don't learn about methods Y or Z, which work differently from X but would also be sufficient, and you don't learn the nuances and details of the problem space for X, Y, and Z.