Answers don't exist in a vacuum. The chat interface allows feedback and corrections. Users can paste an error they're getting, or even say "it doesn't work", and GPT may correct itself or suggest an alternative.
> Answers don't exist in a vacuum. The chat interface allows feedback and corrections. Users can paste an error they're getting, or even say "it doesn't work", and GPT may correct itself or suggest an alternative.
I think you're making the mistake of viewing the job as a black box that produces output.
But what you're proposing is a terrible way to develop someone's skills and judgement. They won't develop if their hand is held all the time (by an LLM or a person); they'll stagnate. The problem with an LLM, unlike a person, is that it will hold your hand forever without complaint, while giving unreliable advice.
Getting some results with the help of an infinitely patient GPT may motivate people to learn more, as opposed to losing motivation from getting stuck, having trouble finding the right answers without knowing the right terminology, and/or being told off by StackOverflow people that it's a homework question.
People who want to grow can also ask GPT for more explanations and use it as a tutor. It's much better at recalling general advice.
And not everyone may want to grow into a professional developer. GPT is useful to lots of people who are not programmers, and just need to solve programming-adjacent problems, e.g. write a macro to automate a repetitive task, or customize a website.
> ...People who want to grow can also ask GPT for more explanations and use it as a tutor. It's much better at recalling general advice.
The psychology there doesn't make sense: the technology simultaneously takes away a big motivation to actually learn how to get the result on your own. It's like giving a kid a calculator and expecting them to use it to learn mental arithmetic; in practice, you've just removed the motivation for most kids to do so.
I think there's a common, unstated assumption in tech circles that removing "friction" and making things "easier" is always good. It's false.
Also, a lot of what you said feels like a post-hoc rationalization for applying this particular technology as a solution to a particular problem, which is a recurring flaw in discourse around "AI" (just as it was with blockchain). That stuff is just in the air.
> ...and/or being told off by StackOverflow people that it's a homework question.
IMHO, that's the one legitimately demotivating thing on your list.