> One thing I've frequently noticed in the rationalist community is the belief that if we all just reason hard enough, we'll reach the same conclusions.
My point is that results like https://www.sciencedirect.com/science/article/abs/pii/000437... show this belief is incorrect. Two people can start with the same priors, the same observations, and the same views on rationality, do the best they can, and still come to diametrically opposed conclusions. And a lifetime of discussion may be too little to determine which one is right. Ditto for putting all the computers in the world to work on the problem for a lifetime.
Real life is worse. We start with different priors, and our experiences include different observations, which makes stark disagreements even easier than in the idealized situation of identical priors and identical evidence.
This result should encourage humility about how much certainty rationality can actually deliver. But few rationalists show anything like that form of humility.
b) The solution you're talking about is an update to the network. It's buried in the network's construction, not directly visible in the network itself. Assigning "blame" within a model is a real line of work, but it is neither heavily researched nor at all cheap computationally.
That said, btilly's claim that "getting an approximate probability answer that is within 49% of the real one is NP hard" isn't exactly true either. That result describes what it takes for an approximation algorithm to *guarantee* a given factor, i.e. to set a worst-case bound. In practice an approximation can still be nearly optimal on average.
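To make the worst-case-vs-average-case distinction concrete, here's a minimal sketch (not from the linked paper; the network and all numbers are the textbook "sprinkler" toy example) comparing exact inference by enumeration against rejection sampling. The hardness result says no algorithm can guarantee a close answer on *every* network, yet on a typical small instance the sampled estimate lands very near the exact posterior:

```python
import random

# Toy "sprinkler" Bayesian network (illustrative numbers only):
# Cloudy -> Sprinkler, Cloudy -> Rain, (Sprinkler, Rain) -> WetGrass.
P_CLOUDY = 0.5
P_SPRINKLER = {True: 0.1, False: 0.5}   # P(sprinkler | cloudy)
P_RAIN = {True: 0.8, False: 0.2}        # P(rain | cloudy)
P_WET = {(True, True): 0.99, (True, False): 0.9,
         (False, True): 0.9, (False, False): 0.0}  # P(wet | sprinkler, rain)

def joint(c, s, r, w):
    # Full joint probability of one assignment to all four variables.
    p = P_CLOUDY if c else 1 - P_CLOUDY
    p *= P_SPRINKLER[c] if s else 1 - P_SPRINKLER[c]
    p *= P_RAIN[c] if r else 1 - P_RAIN[c]
    p *= P_WET[(s, r)] if w else 1 - P_WET[(s, r)]
    return p

def exact_rain_given_wet():
    # Exact P(rain | wet) by brute-force enumeration; fine for 4 variables,
    # exponential in general -- which is exactly where the hardness bites.
    num = sum(joint(c, s, True, True)
              for c in (True, False) for s in (True, False))
    den = sum(joint(c, s, r, True)
              for c in (True, False) for s in (True, False)
              for r in (True, False))
    return num / den

def sampled_rain_given_wet(n=200_000, seed=0):
    # Rejection sampling: draw from the prior, keep samples where wet=True,
    # and report the fraction of kept samples with rain=True.
    rng = random.Random(seed)
    kept = rainy = 0
    for _ in range(n):
        c = rng.random() < P_CLOUDY
        s = rng.random() < P_SPRINKLER[c]
        r = rng.random() < P_RAIN[c]
        w = rng.random() < P_WET[(s, r)]
        if w:
            kept += 1
            rainy += r
    return rainy / kept

print(exact_rain_given_wet(), sampled_rain_given_wet())
```

On this instance the two numbers agree to a couple of decimal places. Nothing about that contradicts the NP-hardness result: you can always construct adversarial networks where the sampler is hopeless, but "hopeless in the worst case" and "useless in practice" are different claims.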
I agree with the broader point, though.
Polynomial validation isn't going to help.