You can also ask it to explain the subject like you're 5, a request that might feel inappropriate to make of a human because it can come across as burdensome.
All of this is heavily caveated by how dramatically wrong LLMs can be, though. The benefits are rendered moot if the person asking is too trusting or unaware of LLMs' tendency to hallucinate, pull from bad training data, or match the wrong patterns.