It can't tell that it's wrong; it just reacts in a plausible way to you telling it it's wrong (acknowledging the mistake and offering another explanation). Since ChatGPT is a fine-tuned version of GPT-3, those problems won't really be solved: being somewhat of a knowledge base is a nice side effect, not the main goal of the model.