Nope, those are very different. Fundamentally, LLMs hallucinate because they are predicting the next most likely word. Finding a way to suppress outputs where P(truth) is low would be expensive, but it isn't completely implausible the way removing software bugs is.
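
Roughly what that filtering would look like, as a toy sketch: the numbers and the threshold below are made up, and average log-probability is only a crude stand-in for P(truth) (a model can assign high probability to a fluent falsehood), which is exactly why doing this properly is expensive.

```python
import math

def mean_logprob(token_probs):
    """Average log-probability of a generated answer (confidence proxy, not truth)."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

# Hypothetical per-token probabilities for two generated answers.
answers = {
    "Paris is the capital of France.": [0.92, 0.88, 0.95, 0.90, 0.97],
    "The Eiffel Tower was built in 1850.": [0.81, 0.34, 0.22, 0.41, 0.56],
}

THRESHOLD = -0.5  # in a real system this would be tuned on held-out data

for text, probs in answers.items():
    score = mean_logprob(probs)
    verdict = "keep" if score > THRESHOLD else "flag as possible hallucination"
    print(f"{score:+.2f}  {verdict}: {text}")
```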