It's interesting to draw parallels between what you describe and the way more general large language models (LLMs) operate (Copilot is, in a sense, a specialized instance of an LLM, applied to code instead of general language): they too always "know" how to answer any question or complete any prompt, without exception. A model that could "show restraint" and "know when it doesn't know" would be a really impressive improvement to this technology, in my opinion.
There are language models that pair the generator with an internal search engine, so they can copy and verify the facts they generate against a source. They are also easier to update: just refresh the search engine's index. The problem then shifts to providing a collection of "true facts".
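To make the idea concrete, here's a toy sketch of that retrieval-augmented pattern. The "search engine" is just an in-memory list with naive keyword-overlap ranking (a real system would use a proper index or vector search, and would condition the model's generation on the retrieved text rather than return it verbatim), but it's enough to show why updating the knowledge is just refreshing the store:

```python
def retrieve(query, corpus, k=1):
    """Rank documents by naive word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer(query, corpus):
    """'Answer' by returning the best-matching source document.

    A real retrieval-augmented model would generate text grounded in
    the retrieved document; returning it directly keeps the sketch
    short while preserving the update path.
    """
    hits = retrieve(query, corpus)
    return hits[0] if hits else "I don't know."

corpus = ["Python 3.12 was released in October 2023."]
print(answer("When was Python 3.12 released?", corpus))

# Updating the system's "knowledge" means editing the store,
# not retraining the model:
corpus.append("Python 3.13 was released in October 2024.")
print(answer("When was Python 3.13 released?", corpus))
```

The point of the toy is the last two lines: the generator never changes, only the collection of facts it draws from.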