I’ve seen GPT-3 and GPT-4 hallucinate the most amazing commentary about their own output. Maybe we’ll eventually get dependable, out-of-process guidance about how factual the model thinks it is on a per-output basis, but until then you should trust every LOC and comment exactly as much as code gifted to you by an adversary.
My modest 2¢