How do you know it was what you were after? Like you said, it could be .toLocaleTimeString or .toLocaleString (or something else).
How do you verify that the AI isn't giving you broken/incorrect code? I guess you could check the docs, or run the code yourself, but at that point what's the value add for copilot?
For example, I can't draw faces, but I can recognize a badly drawn face. If I ask an AI "Please draw me a 35-year-old man with receding hair and crooked teeth", I can quickly validate that the result is fit for purpose. If it's not what I want, I can modify the query, and I quickly learn how to prompt the AI to give me what I want.
In the example you give, we can assume the AI has produced a plausible option, even if it's wrong. For example, a scenario may be:
# User: Write a comment "Convert the date to the current locale for printing"
# Copilot: generates the method 'toLocaleString'
# User: Mouse hover over the method to get the documentation for the 'toLocaleString' method
# User: See that the method produces the wrong output: it includes the date, which we don't want
# User: Modify the comment to "Convert the date to the current locale for printing time only"
# Copilot: generates the method 'toLocaleTimeString'
# User: Yes this is the one I want. Moves on
The key point is you have to know what you want and be able to recognize a correct result. Validating a correct result is often easier than coming up with it yourself, and you have multiple strategies to validate: test cases, compilation, code review, documentation, IDE intellisense.
This obviously gets harder the larger the amount of code Copilot is asked to generate. But good software engineering practices still stand: keep your functions and modules small and to the point.
But code is not a face: you can't easily judge whether it's correct. If you could, you wouldn't need Copilot in the first place. So now you have to trust that it's correct, and if it isn't, you need to search for the correct answer anyway.
Their experience matches mine in my use of copilot.
I can critique a great book I couldn't write. I can marvel at John Carmack's early id Software code without having been able to come up with it myself. I can be immensely impressed by what code golfers produce for a mundane problem.
I'm not saying this is what copilot produces, but the concept could absolutely be useful, in theory.
Think of it like a snippet engine on steroids. It's a huge value add.
By testing it
>but at that point what's the value add for copilot?
Not having to look up the docs and write it yourself.
It also bears repeating that Copilot will only get better.