I don't have access to the training data, so I can't say for sure, but I'm assuming it consists largely of "valid question" => "valid answer" pairs, with few examples of "nonsense question" => "I don't understand" / "that's nonsense".
Edit: I want to add that expressing confidence is not the same as answering the prompt. If I ask someone "give me a drawing depicting Obama's son" and they say "Obama doesn't have a son", I explicitly asked for a drawing and they gave a verbal response instead. I believe this kind of indirect response has to be taught; it can't be expected to emerge naturally from a model that has only been trained to give direct responses.