You can influence it so easily with your inputs as well. You could easily, accidentally point its search toward results someone is more likely to agree with, but which may not actually be factual.
Especially as AI providers implement long-term memory or reflections on historic chats, your own bias will strongly influence the outcomes of fact checking.
You probably mean things where the truth is widely known and you should have no trouble sorting it out if you put a little effort in. TFA has nearly zero information about what sorts of things they're talking about here, and clicking around the website I found only a _little_ more info.
I didn’t cherry-pick it, because obviously it’s not that impressive. But it works: it is professional and gives as much information as it can confidently give.
You said you can influence it, but clearly it is hard. Sure, it is not perfect, but I find that it helps reduce fake news if anything.
You are right that for more subtle questions you can lead it to certain answers. But we should acknowledge that debates that are not fully settled and involve subtleties are not exactly what “spreading misinformation” refers to.
I think that is good in general, I do; I am just wary that it is still shaky ground. The company behind Grok has clear incentives to influence the output of those fact checks, as one example. I am also wary of it eventually conforming to the vox populi as it ingests the internet firehose, leading to the loudest, most prolific voices (and thus their views) always being more represented.