Disclaimer: I think hacker communities have some soul-searching to do here: "trust no one" and "gubbermint bad" enjoy far too cosy an acceptance, in vast disproportion to the reasoning or evidence behind them. Trusting no one isn't feasible, so all we're doing is transferring trust - on the basis of anecdote at best, and usually just memes - from organisations with flawed but improvable accountability to organisations with none whatsoever.
The anti-conspiracy-theory camp can read an individual conspiracy theory and identify erroneous assertions and logic, with ease. Similarly, the pro-conspiracy-theory camp can read an individual mainstream news article and identify erroneous assertions and logic, with ease. And then, when observing each other's camps (in sufficient quantities over a long period of time), they each consider the other community to be foolish (and unaware of it) in an aggregate sense. And they're both correct.
The anti-conspiracy camp is correct in that if one spends any time in the conspiracy community, it is not difficult to observe that many of its members clearly and passionately believe things, with absolute certainty, for which there is not sufficient conclusive supporting evidence. Similarly, the pro-conspiracy camp is correct in that if one spends any time in mainstream communities, it is not difficult to observe that many of their members clearly and passionately believe things, with absolute certainty, for which there is not sufficient conclusive supporting evidence (aka: axioms).
Members from both communities will take offence (usually "quite" passionately) at some portion of the above, and attempt to rebut the assertion in the standard form:
[rhetoric, narrative, "logic", "common sense", "facts"/axioms/intuitions presented as facts] + [and then therefore we shall conclude...]
...but there will almost always be a flaw in the respective rebuttals: invalid epistemology.
At least part of the reason these two camps cannot have a productive dialogue and agree to a compromise somewhere "in the middle" - agreeing on that which they actually agree on, and disagreeing, explicitly and precisely, only on the subset of points where disagreement actually(!!) exists - is that both camps suffer from loose epistemology: a willingness (and often, extreme eagerness) to believe things that are consistent with their priors to be True(!!!), without adequate and conclusive supporting evidence. The mind then seems to develop a kind of all-or-nothing, total-war defence of each comprehensive idea it holds (each of which is typically riddled with errors and untruths), and hilarity inevitably ensues.
I think the same argument is also quite applicable to many other realms, politics being perhaps the most obvious.
I wonder: if members of the two camps could come to realize the above, might it diminish the ability of those in power to so easily pit them against each other in a never-ending cultural meme war? Might it in turn free up their minds and time to more closely and skilfully observe and analyze the actions of those in power (who can currently operate largely unmonitored, unanalyzed, and unopposed, and who can censor anything that gets too dangerous to their interests with <some semi-plausible reason>)? That would be in the best interests of both camps, and typically of the majority of all peoples, regardless of group affiliation. And if we extended this principle even more broadly, across all current hot-button topics in the country and in the international world, could we maybe usher in an era of calmer, more reasoned cooperation between the various parties who disagree on a few specific details, but largely agree (without realizing it) on the vast majority of issues from the "big picture" perspective?
Everyone mostly runs a somewhat crappy model of the world, given time and brain constraints. People can be viewed as a graph of agents collecting information about subjects, along with meta-information about other agents, needed to construct a much larger pool of information.
Conspiracy theorists who come into a discussion with a default-disbelief position on a particular agent are likely to spend much more time finding fault, and may actually be more accurate at identifying real faults, in the same way that you and I are more apt to correctly identify a variety of conspiracy theorists as kooks and find immediate fault with their arguments.
Unfortunately, kooks are apt to have a broken model of who is trustworthy, and to have built up a large collection of incorrect facts. A particular challenge is that they are possessed of a large collection of "facts" that they aren't learned enough to have come up with in the first place, and don't really have an accurate model of. See the moon landing truthers as a particular example. It's trivial to collect "facts" that require only a small amount of bad understanding to "get", but a substantial understanding of science to actually refute. Since they can't build an accurate enough model of the world on their own, they would have to accept the expertise of others as valid in order to correct their model of the world. Having already rejected such expertise, there is no hope for them.
These "facts" act like prions corrupting their model. When they see people providing true and valid info they are predisposed to discard that agents information as corrupted because it contradicts prior beliefs. Since their prior beliefs predispose them to believe bad agents and disbelieve good agents their model inevitably gets worse until it is unrecoverable.
As a binary, perhaps. But is the ability of the human mind to alter the manner in which it forms axioms fixed? How would we know?
> It's impractical to construct a usefully large model of the world in the tiny slice of time afforded us without accepting far more uncritically based on prior performance of agent, apparent validity, credentials, social position etc etc and only discarding or doubting when given reason to.
I am not recommending that anyone stop "accepting far more uncritically based on prior performance of agent, apparent validity, credentials, social position etc etc"; I am asking people not to label unknown or axiomatic beliefs as ~"known for certain to be true/false - any new or conflicting information is therefore false". I recommend using the history of physics or medicine as one's guide when pondering this.
> Everyone mostly runs a somewhat crappy model of the world, given time and brain constraints. People can be viewed as a graph of agents collecting information about subjects, along with meta-information about other agents, needed to construct a much larger pool of information.
This is exactly the style of abstract, systems analysis thinking that I believe the world needs far more of.
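As a bare-bones sketch of what that framing might look like in code (the names, numbers, and pooling rule here are purely illustrative): each person is a node holding credences about subjects, plus trust weights over other nodes, and most of the model gets filled in secondhand.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    beliefs: dict = field(default_factory=dict)  # subject -> credence in [0, 1]
    trust: dict = field(default_factory=dict)    # agent name -> weight in [0, 1]

    def hear(self, source: "Agent", subject: str) -> None:
        """Absorb a claim secondhand, discounted by trust in the source."""
        credence = source.beliefs.get(subject)
        if credence is None:
            return
        w = self.trust.get(source.name, 0.5)    # unknown sources get neutral weight
        prior = self.beliefs.get(subject, 0.5)  # no opinion yet = maximal uncertainty
        self.beliefs[subject] = (1 - w) * prior + w * credence

alice = Agent("alice", trust={"bob": 0.9, "mallory": 0.1})
bob = Agent("bob", beliefs={"the moon landing happened": 0.99})
alice.hear(bob, "the moon landing happened")
print(alice.beliefs)  # alice moves most of the way toward bob's credence (~0.94)
```

Almost everything any of us "knows" arrives via hear() rather than via firsthand verification, which is why the meta-information (the trust weights) ends up mattering as much as the information itself.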
> Conspiracy theorists who come into a discussion with a default-disbelief position on a particular agent are likely to spend much more time finding fault, and may actually be more accurate at identifying real faults, in the same way that you and I are more apt to correctly identify a variety of conspiracy theorists as kooks and find immediate fault with their arguments.
From an abstract perspective, this same behavior can be seen in all human beings, regardless of their community affiliations - it seems to be enabled by innate, subconscious heuristics. The frequency and magnitude may vary per community, and adjusting one's heuristics accordingly is both reasonable and logical, but if one's heuristic is to assume that membership in a group necessarily implies certain things, without exception, then one has opened oneself up to a future of erroneous thinking.
For a specific instance of this at the object level (people whose skin colour differs from one's own, and to what degree that should be considered in one's decision making), this advice seems easy to understand and uncontroversial. But when you simply change the dimension of categorization (to membership in a group defined by something other than skin colour), might a fundamental change in thinking style occur? And might the mind energetically defend that belief (while also not wanting to get too deeply into a discussion about just why it is that it so aggressively defends this particular belief, when for others it has little more than indifference)? Is there perhaps something "special but unseen" about this one? How might one know the answer to that question?
> Unfortunately, kooks are apt to have a broken model of who is trustworthy, and to have built up a large collection of incorrect facts. A particular challenge is that they are possessed of a large collection of "facts" that they aren't learned enough to have come up with in the first place, and don't really have an accurate model of.
Again, what you are saying applies to all human beings, and always has - it is innate. Compare current ~"consensus beliefs" in 'USA 2020' to the consensus beliefs held in various countries throughout the world: are there disagreements between groups? Is one group always right, and the other always wrong? By what means does one know the answer, 100% of the time? Or, instead of comparing to other countries, compare to prior periods in US history, and ask the same questions.
> See the moon landing truthers as a particular example. It's trivial to collect "facts" that require only a small amount of bad understanding to "get", but a substantial understanding of science to actually refute. Since they can't build an accurate enough model of the world on their own, they would have to accept the expertise of others as valid in order to correct their model of the world. Having already rejected such expertise, there is no hope for them.
This seems fairly true. So, what conclusions are we to form from these facts? Are moon landing truthers a big problem? Are the attributes of this one conspiracy representative of the attributes of all ideas from that community, such that via simple logic we know(!) the proper heuristic is to immediately reject all ideas from that community? If that isn't what you're saying, what specific idea are you intending to communicate with this example?
> These "facts" act like prions corrupting their model. When they see people providing true and valid info they are predisposed to discard that agents information as corrupted because it contradicts prior beliefs. Since their prior beliefs predispose them to believe bad agents and disbelieve good agents their model inevitably gets worse until it is unrecoverable.
Do you believe that this is a 100% one-way street? Do you hold the belief that literally every single idea in the conspiracy community is 100% wrong, and that where there are conflicting ideas, the corresponding theory in the mainstream world is 100% correct (known(!) to be correct, as opposed to axiomatically correct)? If so, upon what actual evidence does your belief system rest?
All sorts of the (perceived to be) "facts" on many matters are simply not facts. There are a massive number of questions in the world for which the answer is literally not known. I am suggesting that people start distinguishing, explicitly, between True, False, and Unknown. I should probably also point out an accompanying and often unrealized idea: certainty is not a prerequisite for action. Of course we all know this at the abstract level, but let's not forget it at the object level. Abstract knowledge being seemingly inaccessible from object-level cognitive processes can be regularly observed, and can sometimes result in extremely harmful outcomes - I am suggesting that we keep such things in the forefront of our minds.
The main question I would like to put to people is simple: are you willing, and able, to differentiate between True, False, and Unknown? Both in a binary "matter of principle" sense, and in a "to what degree are you able to do it, with high skill, in constant, reliably consistent, real-time operation" sense? (And consider: how do you know whether the answers you are giving are actually correct, and not estimates, or intentions?)
And if the answer to the first question is "No", or <downvote>, ask yourself whether choosing wilful ignorance is a good idea, and consider whether there may be some sort of unseen force involved in making this choice - in a programming forum of all places.
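Since this is a programming forum: the True/False/Unknown discipline has a standard formalization, Kleene's three-valued logic. A minimal sketch (the type name and the closing example are my own illustration):

```python
from enum import Enum

class TV(Enum):
    TRUE = 1
    UNKNOWN = 0
    FALSE = -1

def tv_not(a: TV) -> TV:
    return TV(-a.value)                # NOT(Unknown) is still Unknown

def tv_and(a: TV, b: TV) -> TV:
    return TV(min(a.value, b.value))   # False dominates; Unknown taints True

def tv_or(a: TV, b: TV) -> TV:
    return TV(max(a.value, b.value))   # True dominates; Unknown taints False

# The point of the exercise: a conclusion resting on an Unknown premise
# is at best Unknown, no matter how solid the other premises are.
premise = TV.UNKNOWN
observed_fact = TV.TRUE
print(tv_and(premise, observed_fact))  # TV.UNKNOWN
```

And, per the earlier point, Unknown is still actionable: one acts under acknowledged uncertainty, rather than upgrading the premise to True first.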
EDIT: I would also like to restate an idea from my prior comment, in hopes of catalyzing more in-depth abstract thinking:
>> Members from both communities will take offence (usually "quite" passionately) at some portion of the above, and attempt to rebut the assertion in the standard form:
>> [rhetoric, narrative, "logic", "common sense", "facts"/axioms/intuitions presented as facts] + [and then therefore we shall conclude...]
>> ...but there will almost always be a flaw in the respective rebuttals: invalid epistemology.
Looking at US politics, there has been some suggestion by progressive libertarian politicians (such as Bill Weld) that the majority of Americans are simultaneously tolerant of socially progressive and fiscally conservative principles.
I don't have any statistics on this, but it appears consistent with the people I know (mostly in California). My friends who are Democrats are likely to fight loudly for gay rights, but are less loud about government programs getting cut (they may still be vocal about that, just not as loudly). My friends who are Republicans are typically worried about government waste and socialism. Sure, some may be pro-life, but that's not their top priority.
This isn't to say there aren't Democrats passionate about fiscal issues (Sanders wing), or Republicans passionate about social issues (religious right), simply that it is plausible these camps are <50% of each party, which could imply there is more agreement and/or tolerance between what the two parties stand for than disagreement.
However, I am interested in the opposite: scenarios where human beings cannot cooperate (reliably, consistently, and at scale). More fundamentally: where it seems impossible to achieve this cooperation (even though it is in both parties' best interests), why is it impossible? Is there an underlying cause, and if so, what is it?
I happen to subscribe to the theory that the problem is innate (in some manner) and neurological/psychological in nature, and I think there is plenty of evidence to support this type of theory. As a thought experiment, imagine it like this: let's assume that there are certain things that the human mind simply cannot do (the reasons are irrelevant for now), certain ideas that the human mind simply will not compute - full stop.
Now, imagine that one of these ideas is a prerequisite for achieving reliable, consistent, at-scale cooperation (perhaps not in all instances, as your example demonstrates, but sometimes/often, which seems to be the case). Also imagine that there is another of these "neurological cannot-do's", and this one just so happens to be a prerequisite for the cognitive processing required to identify the first one. Because CannotDo #2 cannot execute, CannotDo #1 cannot be discovered.
If this were the case, you would have a situation where a fundamentally important problem can be identified, one that all or most parties support fixing, with no apparent impassable barriers to fixing it, and yet the fix can never be achieved. You are stuck in an unsolvable problem, blocked by something that you cannot see.
So now that you have this theory, is it possible to find some real-world examples of unusual/illogical human behavior that plausibly match it (which would represent a sanity check, not a proof)?
1. I believe that it is not a productive use of mental energy (for myself, or others)
2. My account is rate-limited on HN, so I have to use the few posts I am allowed per day wisely (and yes, I am aware of the irony in this statement)
3. Due to the nature of how the mind seems to work, I believe this runs the risk of allowing minds who are looking for "an escape" from the above an opportunity to resume normal "all is well, nothing to see here" operations
Rather, I encourage people who are "reckoning about the problems of the world" to spend less time thinking (and arguing) about relatively trivial object-level matters like this (instances of problems), and more time in an abstract, systems analysis and decomposition mindset (the nature of these problems). The end goal: gaining greater understanding of how the system we've built for ourselves to live in actually works (as opposed to our axiomatic beliefs about how it works), and of just how and why it seems to produce so many outcomes (some of them plausibly existential risks) that are counter to the true, innermost desires of most of the people living within it.
This overall situation seems like a rather large paradox - my wish is that more people could find a way to become curious about it, and approach it with the same engineering mindset (but applied at the abstract level) that we use every day in our respective lines of work.