Today's programmers can see Copilot output and probably think "well, that's not optimal." Fast forward five years: new CS grads are using Copilot 3.0 and are used to the specific auto-completes Copilot gives for certain tasks, because they may never have needed to go beyond the more basic suggestions.
It “feels” like an older programmer seeing a younger web dev and going “you’re wasting MB of memory!”
While it's true the web has gotten slower in many regards, and memory has indeed been wasted, business value creation typically doesn't care if a few MB are wasted suboptimally, while the previous generation does.
The airline industry has talked about this, of course, and the adoption of robotic surgery has opened up a whole new training problem, because its escape hatch when something goes wrong or the robot can't complete a procedure is often "complete the surgery manually." Which is fine on day 1 of robotic surgery, but what about day 2, when surgeons typically don't have hundreds of similar procedures under their belts? And when the only time they're called on to exercise the skill is in difficult edge cases?
We have basically turned driving a standard transmission into a weird old-person quirk or niche enthusiast skill in the United States. If an automatic transmission required a similar manual fallback or check, how well would that work? Well, it would work fine if basically everyone already had a lot of practice driving a manual--but now? It wouldn't work well at all. Of course, automatic transmissions don't fail like that, and are a lot better at switching gears than AI assistants are at generating code.

I worry about the semi-automated approach to self-driving, where the driver may not actually have currency with their driving skill, and where--in the instance that it's necessary--a driver has to react to more complicated situations (they don't have practice with the simple ones, and they have to react not to a hazard but to their car's failure to react to a hazard).
I crammed a gigabyte of ASCII into a string in FreePascal, it was glorious!
Waste all the MB you want!
(I'm 59)
Can =/= Will
And as others have also pointed out, it requires not only effort but knowledge, and that knowledge will be systematically degraded the more AI-ish code generation is used.
OTOH, those with super-diligent hacker attitudes will learn how to find the flaws in generated code and optimize it, truly leveraging the tool, but most will just move on to the next task/ticket as soon as the AI-ish code passes the unit tests. So super-leveraging AI-generated code will be rare.
How is that different than the plague of junior devs we've always had? Devs will get more senior by identifying and correcting issues in AI code rather than code from their peers. Seems OK, like we just got a whole lot more coding capacity.
If you write a good comment describing name, inputs, outputs, logic, and exceptions, the "generate code from comment" capability is kind of amazing. I'm a terrible, hacky programmer, and it has wholly converted me to documentation-first.
I can see this go both ways:
Boilerplate code can be a great generic solution to a set of problems, but a more seasoned programmer may say "that works, but for our use case the trade-offs don't make sense."
Or alternatively, “this code wasn’t something I knew I could do in language X, and it’s far more efficient”