We should all know that given a belief about the world, and evidence, Bayes' Theorem describes how to update our beliefs.
But what if we have a network of interrelated beliefs? That's called a Bayesian net, and it turns out that Bayes' Theorem also prescribes a unique answer. Unfortunately, it also turns out that working out that answer is NP-hard.
OK, you say, we can come up with an approximate answer. Sorry, no, coming up with an approximate answer that gets within 0.5 − ε of the right probability, for any ε > 0, is ALSO NP-hard. It is literally true that under the right circumstances a single data point logically should be able to flip our entire world view, and working out which data point does it is computationally intractable.
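To make the updating step concrete, here is a minimal sketch in Python (all probabilities invented) of exact inference in a three-node net by brute-force enumeration; the closing comment is where the intractability bites:

```python
from itertools import product

# Toy net: Rain -> Sprinkler, (Rain, Sprinkler) -> WetGrass.
# All probabilities below are invented for illustration.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(sprinkler | rain)
               False: {True: 0.40, False: 0.60}}
P_wet = {(True, True): 0.99, (True, False): 0.80,  # P(wet | rain, sprinkler)
         (False, True): 0.90, (False, False): 0.00}

def joint(rain, sprinkler, wet):
    """P(rain, sprinkler, wet) via the chain rule of the net."""
    pw = P_wet[(rain, sprinkler)]
    return P_rain[rain] * P_sprinkler[rain][sprinkler] * (pw if wet else 1 - pw)

# Bayes' Theorem, computed by enumerating every assignment:
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(num / den)  # P(rain | wet grass) ~= 0.358

# With n binary variables this sum has 2**n terms; exact inference is
# exponential in general, matching the NP-hardness result above.
```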
Therefore our brains use a grab bag of heuristics, each with known failure modes. You can read all the LessWrong you want. You can read Thinking, Fast and Slow and learn why we fail as we do. But the one thing we cannot do, no matter how much effort we put in, is muster the sheer brainpower required to actually BE rational.
The effort of doing better is still worthwhile. But the goal itself is unachievable.
That a problem is NP-hard, even NP-hard to approximate, does not mean the average input cannot be solved efficiently.
Examples:
- An NP-hard problem is not sufficient for building crypto; cryptography needs average-case hardness, not just worst-case hardness.
- Type inference for many programming languages is EXPTIME-complete, yet those languages prosper and compile just fine.
Beware the idea of taking a mathematical concept and proof and inducing from it to the world outside the model.
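To make the average-case point concrete, here is a toy sketch (a naive DPLL solver on random 3-SAT; the parameters are picked arbitrarily and this is not a benchmark): SAT is the canonical NP-hard problem, yet instances drawn away from the hard threshold are typically dispatched almost instantly.

```python
import random

def dpll(clauses, assignment=frozenset()):
    """Naive DPLL. Clauses are frozensets of ints; -v means NOT v.
    `assignment` is the set of literals currently taken to be true."""
    clauses = [c for c in clauses if not (c & assignment)]  # drop satisfied
    if not clauses:
        return True                                   # all clauses satisfied
    if any(all(-l in assignment for l in c) for c in clauses):
        return False                                  # a clause is falsified
    assigned = {abs(l) for l in assignment}
    v = next(abs(l) for l in clauses[0] if abs(l) not in assigned)
    return (dpll(clauses, assignment | {v}) or        # try v := true,
            dpll(clauses, assignment | {-v}))         # then v := false

# Random 3-SAT at clause/variable ratio 2.0, well below the ~4.27
# threshold where random instances get hard.
rng = random.Random(0)
n_vars, n_clauses = 30, 60
clauses = [frozenset(rng.choice((-1, 1)) * v
                     for v in rng.sample(range(1, n_vars + 1), 3))
           for _ in range(n_clauses)]
print(dpll(clauses))  # typically True, and nearly instant
```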
The article is asking why it's so hard to be rational though, i.e. to follow a logically valid set of inferences forward to an unambiguous conclusion. Assuming one of your premises is that correct rationality implies reasoning statistically about a network of interrelated beliefs, the intractability of updating a Bayesian net is relevant to that.
However, in practice, complex Bayesian nets do wind up being computationally intractable. Attempts to build real-world machine learning systems therefore consistently fall back on computationally tractable heuristic methods with rather obvious failure modes.
I think using chaos theory / Bayesian concepts is a significantly better metaphor for "life as we experience it" than it is for the examples you gave.
The recommendation of the theory is that if you can't be rational about a specific problem, pick another problem, preferably a simpler one.
Unfortunately lots of chimps in the troupe are incapable of doing that and therefore we shall always have drama.
The demonstration that, in theory, updating a Bayesian network is a computationally infeasible (NP-hard) problem was G. F. Cooper's, in 1990. The stronger result that even approximating the update is computationally infeasible was Dagum, P. & Luby, M., 1993.
So Simon's work relates to what I said, but isn't based on it.
It's a model, not a fact. As a model, it can't really be correct, only more or less accurate.
Even beyond the hard process bottleneck on creating or lucking upon events that produce the evidence we need, however, there is also the limitation that Bayes only gives you a probability. It doesn't give you a decision theory or even a thresholding function. For those, you need a whole lot of other things like utility functions, discount rates, receiver operating characteristics and an understanding of asymmetric costs of false positives versus false negatives, that are often different for each decision domain.
And, of course, to get a utility function meaningful for humans, you need values. There is no algorithm that can give you values. They're just there as a basic primitive input to all other decision making procedures, yet they often conflict in ways that cannot be reconciled even within a single person, let alone across a society of many people.
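As a toy illustration of the thresholding point above (a sketch with placeholder costs, not a recommendation): the same probability can rationally demand opposite decisions once asymmetric costs enter.

```python
def should_act(p_event, cost_false_positive, cost_false_negative):
    """Act iff the expected cost of not acting exceeds that of acting.
    The costs are placeholders you must supply from your own values."""
    return p_event * cost_false_negative > (1 - p_event) * cost_false_positive

# The same 20% probability, two different decision domains:
print(should_act(0.2, cost_false_positive=1, cost_false_negative=100))  # True
print(should_act(0.2, cost_false_positive=100, cost_false_negative=1))  # False
```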
I do believe this is zero-sum, in that improving on one set of decisions means not applying the same rigor to others.
This is often seen in the form of very smart people also believing conspiracy theories or throwing their hands up around other massive issues. As an example, the "Rationalist crowd" has de-emphasized work on climate change mitigation in favor of more abstract work on AI safety.
Why is that "the" goal?
Who sets "the" goal?
Take a non-political example: How safe are whole tomatoes to eat? What did the grocery store spray on them? Is it safe? Will it wash off? What about the warehouse where they were stored for months, what did they put on them to keep them from spoiling? What about the farmer, what did they spray on them to protect against pests? What is in the water, is it safe? Now we're ready to eat: Does anyone in my family have any kind of intolerance to raw tomatoes? And this is a pretty simple toy example.... In general, we've collectively decided to trust in the good in people. We hope that if something is bad/a lie/harmful, then someone in the know will raise the alarm for the group.
The goal of rational thinking is not some conceit of perfection [1] but debugging the runtime for a better result. Humans are in fact very good at communication and at debugging language errors. They have evolved a rational capacity. It can evidently be developed but it needs to be exercised.
This is where the hypothesis of an educational system often enters the discussion.
[1] Galef and others call the "Star Trek" Spock character a Vulcan Strawman or Straw Vulcan. https://en.wikipedia.org/wiki/Julia_Galef
I further maintain that it's definitionally impossible. Before we find it computationally impossible, we will find that we can't write a complete, detailed requirements specification defining what rational is.
(Of course, we recognize egregious irrationality when we see it; that's not what I mean; you can't just define rationality as the opposite of that.)
People can behave rationally (or not) with respect to some stated values that they have. But those can be arbitrary. So the requirement specification for rationality has to refer to a "configuration space", so to speak, where we program these values. This means that the output is dependent on it; we can't write some absolute test case for rationality that doesn't include this.
Problem is, people with different values look at each other's values and point to them and say, "those values are irrational; those people should adopt my values instead".
Luckily we get our values from a bunch of heuristics developed through millions of years of biological and social evolution, so we mostly have the same ones, just with different relative weights.
Won't be true if we ever meet (or make) some other sentient critters.
Not all questions have answers. If you want to be rational when you are asked to answer those questions, you can just say "I don't know".
At the beginning of the pandemic, when politicians were saying masks don't work, you could just say: well, if covid is transmitted by air, then putting something in front of my mouth is going to decrease the spread. That's being rational. Of course that's not always going to be the right answer, but you have still thought rationally.
I'm not really sure what you are trying to prove. Of course being rational is possible. All people are rational for most of their decisions.
If I squint at a statement like this, I guess it could be called rational, but it is certainly not rigorous or convincing. You brush over too much and are making lots of assumptions.
Are these statements rational?
The sun is warm, so if I climb a ladder I will be closer to the sun and therefore warmer.
Masks impede airflow, so if I wear a mask I will suffocate.
Bleach kills germs, so drinking bleach will make me healthier.
It is very easy to make an incorrect idea seem rational. You should wear masks because rigorous science tells us that it is effective. That is the only valid justification. "Common sense" is used to justify a lot of junk science.
Are straw-man statements rational?
"Then there is the infamous mask issue. Epidemiologists have taken a lot of heat on this question in particular. Until well into March 2020, I was skeptical about the benefit of everyone wearing face masks. That skepticism was based on previous scientific research as well as hypotheses about how covid was transmitted that turned out to be wrong. Mask-wearing has been a common practice in Asia for decades, to protect against air pollution and to prevent transmitting infection to others when sick. Mask-wearing for protection against catching an infection became widespread in Asia following the 2003 SARS outbreak, but scientific evidence on the effectiveness of this strategy was limited.
"Before the coronavirus pandemic, most research on face masks for respiratory diseases came from two types of studies: clinical settings with very sick patients, and community settings during normal flu seasons. In clinical settings, it was clear that well-fitting, high-quality face masks, such as the N95 variety, were important protective equipment for doctors and nurses against viruses that can be transmitted via droplets or smaller aerosol particles. But these studies also suggested careful training was required to ensure that masks didn’t get contaminated when surface transmission was possible, as is the case with SARS. Community-level evidence about mask-wearing was much less compelling. Most studies showed little to no benefit to mask-wearing in the case of the flu, for instance. Studies that have suggested a benefit of mask-wearing were generally those in which people with symptoms wore masks — so that was the advice I embraced for the coronavirus, too.
"I also, like many other epidemiologists, overestimated how readily the novel coronavirus would spread on surfaces — and this affected our view of masks. Early data showed that, like SARS, the coronavirus could persist on surfaces for hours to days, and so I was initially concerned that face masks, especially ill-fitting, homemade or carelessly worn coverings could become contaminated with transmissible virus. In fact, I worried that this might mean wearing face masks could be worse than not wearing them. This was wrong. Surface transmission, it emerged, is not that big a problem for covid, but transmission through air via aerosols is a big source of transmission. And so it turns out that face masks do work in this case.
"I changed my mind on masks in March 2020, as testing capacity increased and it became clear how common asymptomatic and pre-symptomatic infection were (since aerosols were the likely vector). I wish that I and others had caught on sooner — and better testing early on might have caused an earlier revision of views — but there was no bad faith involved."
"I’m an epidemiologist. Here’s what I got wrong about covid."(https://www.washingtonpost.com/outlook/2021/04/20/epidemiolo...)
This is a serious question. We should always challenge our preconceptions. To take your examples:
1. Traditional Judeo-Christian religions all claim we should believe because of claims made in holy books of questionable provenance, held by primitive people who believed things like (for example) disease being caused by demons. What rational reason is there for believing these holy books to be particularly truthful? (I was careful to not include Buddhism, whose basis is in experiences that people have while in altered states of consciousness from meditation.)
2. The shortcomings of libertarianism involve various tragedies of the commons. (My favorite book on this being The Logic of Collective Action.) However the evidence in favor of most government interventions is rather weak. And the evidence is very strong that well-intended government interventions predictably will, after regulatory capture, wind up creating severe problems of their own. How do you know that the interventions which you like will actually lead to good results? (Note, both major US parties are uneasy coalitions of convenience kept together through the electoral realities of winner-takes-all. On the left, big labor and environmentalism are also uncomfortable bedfellows.)
3. To the extent that the observer is described by quantum mechanics, many-worlds is provably a correct description of the process of observation. In the absence of concrete evidence that quantum mechanics breaks down for observers like us, what rational reason is there to advocate for any other interpretation? (The fact that it completely violates our preconceptions about how the world should work is an emotional argument, not a rational one.)
Even when SA himself eventually started questioning his response/allegations, most of the mob (there really is no other word for it) would have none of it. All absolutist and conspiracy-laden.
PG said keep your identity small. I've found few rationalists or libertarians of any bent who meet that criterion.
See also Gigerenzer's Ecological Rationality.
This seems to be a pretty good overview:
The question is, can we organize and educate ourselves so we can leverage that parallel power and let each person become an expert in their area, with proper trust and incentives? And manage to pass along the previous generation's computation to the next, without corrupting the data?
Edit: And I forgot all the tools we've designed to help us compute all that, among which I'd count math as one, and computers as another.
I'd go further and say that there are real-world issues that compound the variables. Namely, that individual actions increasingly have global consequences, e.g. individual purchasing behaviors have externalities that the market is not pricing in and that thus fall to the consumer to calculate.
Further, because these issues are global, the calculations are game-theoretic by nature, making it even more complicated.
Should we? What of the problem of induction?
And neither logic nor mathematics offers a solution. In practice, however, we do. But, as any parent should know, we don't do it through a rational process.
One thing I'll add that drives me nuts is the fetishization of Bayesian reasoning I sometimes see here on HN. There are times when Bayesian reasoning is helpful and times when it isn't. Specifically, when you don't trust your model, Bayes' rule can mislead you badly (frequently when it comes to missing/counterfactual data). It's just a tool. There are others. It makes me crazy when it's someone's only hammer, so everything starts to look like a nail. Sometimes, more appropriate tools leave you without an answer.
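As a toy illustration of that failure mode (hypotheses and numbers invented): when the truth isn't representable in your model, Bayes' rule confidently converges on the least-wrong hypothesis on offer.

```python
def posterior(prior, p_heads, flips):
    """Bayes updates over a fixed hypothesis set; p_heads[h] is the
    assumed P(heads | h) under hypothesis h."""
    post = dict(prior)
    for flip in flips:
        for h in post:
            post[h] *= p_heads[h] if flip == "H" else 1 - p_heads[h]
        total = sum(post.values())
        post = {h: v / total for h, v in post.items()}
    return post

# The model admits only "fair" or "heads-biased" coins; the actual coin
# is tails-biased, a possibility the model cannot even express.
prior = {"fair": 0.5, "heads-biased": 0.5}
p_heads = {"fair": 0.5, "heads-biased": 0.9}
print(posterior(prior, p_heads, "TTTHTTTTHT"))
# Nearly all mass lands on "fair", with great confidence, while the
# truth sits entirely outside the model.
```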
Apparently that's not something we're willing to live with.
I like to tell people that charts work better for asking questions than answering them. Once people know you look for answers there, the data changes. More so than it does for question-asking (people will try to smooth the data to avoid awkward questions).
Learning about logical fallacies and identifying them in conversations is great. Don't tell the counterparty about their logical fallacies in conversation, because that's off-putting. Just note them internally for a more rational inner dialogue.
Learning other languages and cultures is another way to learn about how different societies interact with objective truth. Living other places taught me a lot about how denial works in different places.
Thinking rationally is quite hard and I've learned how to abandon it in a lot of situations in favor of human emotions. How someone feels is more important than how they should feel.
This also had a rather frustrating effect. It is true that not just intensely traveling (not in the sight seeing way), but also actual living in several different countries and cultures, changed my horizon a lot. It definitely had the effect you talk about.
But then what? You cannot tell your partner in discussion "you would not think like that if you had traveled/lived outside of your culture", and it's also impossible to send everyone off to travel in order to experience the same. Much less in the US, where for most of the country you cannot just hop into a train for a few hours to encounter a completely different language and culture. (I grew up in Europe and moved to the US as an adult, but I've also lived in several different European countries before, and traveled to far away places like Asia.)
I see it everywhere, from my own decision making process to international politics. Just this morning I was thinking about it as I read the news about the US leaving Afghanistan, and last week talking with a friend who is staying at a bad job.
And here's the answer: persistence is good when it is successful. If the activity is unsuccessful, it's an example of the irrational sunk cost fallacy. (Making decisions without knowledge of future events is quite hard.)
And the important lesson: If you bail at the first sign of adversity, no one can ever accuse you of being irrational. Of course, as the old saying goes, all progress is made due to the irrational.
It's not an easy task. But 10 minutes a day can add up and reinforce that information.
A related idea is cognitive distortion. It's basically an irrational thought pattern that perpetuates negative emotions and a distorted view of reality. One example many here can relate to is imposter syndrome. But to feel like an imposter you have to overlook your achievements and assets and cherry-pick negative data points.
Can you elaborate on that?
This really piqued my interest. I feel like logic is easy to apply retrospectively (especially so for spotting fallacies), but trying to catch myself in a fallacy in the present feels like excessive second-guessing and overanalyzing. The sort that prevents forward momentum and learning.
Would you by any chance have any recommendations for reading on the topic?
As far as arguments go "That's an XXX fallacy" is one of the weaker ones, if not fallacious in and of itself.
We hope.
Big business wants people to buy things they don't need, with money they don't have, to impress people they don't like.
Politicians want people who will drink the Kool-Aid and follow what they (the politicians) say (and not what they do).
Religions... well, same.
And so all messaging, from advertising to movies, TV, and narrative, is about hijacking people's feelings and suppressing rationality. Common sense is no longer common, and doesn't make much sense.
They are rejecting the authorities that in the past have tried to associate themselves with "rationality". The political think tanks. The seminaries. The universities. Government agencies. Capitalist CEOs following the "invisible hand" of the market.
All of these so-called elites have biases and agendas, so of course none of them should be accepted at face value.
I think what's missed is that rationality is not about trusting people and organizations, but about trusting a process. Trusting debates over lectures. Trusting well-designed studies over trusting scientists. Trusting free speech and examining a broad range of ideas over speech codes and censorship. Trusting empirical observation over ideological purity.
This is the value system of the so called "classical liberals", and they are an ever more lonely and isolated group. There is a growing embrace for authoritarianism and defense of tribal identity on both the "left" and the "right" taking its place.
We know you're talking about other engineers, and we agree about those fools!
Also, you know how software engineers like to think that they're rocket scientists? Well, it brings me no pleasure to report that rocket scientists think they're software engineers.
Rationality, to me, is really about an open-minded approach to beliefs. Allowing multiple beliefs to overlap, to compete, to adapt, without interfering too much with the process.
If you want to be rational about an opinion, you have to ask first, "what are my hypotheses?" Most people start with the opinion and then work down to the hypotheses. It can't work like that. It's the hypotheses plus the logic that should create an opinion, not the other way around.
This doesn't seem very rational. If your beliefs are in conflict and you're content to not resolve that, then pretty much by definition you're accepting a logical inconsistency.
If resolving the intersection doesn't lead to a new stable belief system, then aren't you basically going with "whatever I'm feeling that day"?
However, the drive for total and pure consistency is also misguided in my judgement. One reason why we usually feel so motivated and conflicted (to the point where it can lead to depression) with inconsistency is the psychological effect of cognitive dissonance. It's not clear to me that the only way to quieten cognitive dissonance is to resolve the dissenting thoughts.
Another way is to accept that not everything needs to be resolved. This can be great for mental health - again, just in my experience. Don't let the (sometimes irrational) effects of cognitive dissonance override your decision making. Resolution can work, but so can acceptance.
This is just my perspective, but very few beliefs or values map to the whole of reality... they tend to bind to certain aspects of it, with a variable priority along the spectrum of that particular dimension, whether it's personal agency, the color red, public health, spiders, etc.
However, reality rarely provides us with the ability to take a position purely on one factor... nearly every context in which a decision is required operates at the nexus of an uncountable number of these dimensions. Some you can feel swelling to the fore as their slope in your mental 'values' model increases; others stay dormant because you don't see how they apply. This is how most of my decisions that might look outwardly 'inconsistent' arise: there are confounding factors that dominate the topology and steer me in a different direction.
And, also, sometimes you think you've settled on the right path, but then you later get a new piece of information and have to reevaluate.
So to me it's not so cut and dry.
Dealing with contradictions in our own beliefs (paradoxes) is a part of life. The rational approach is to accept that and "fuse" those beliefs carefully, not (a) accept one and reject the others or (b) avoid the topic entirely.
Focus on yourself and controlling your emotions. Be the calm.
The really interesting thing here is why emotions work as they do, and what the patterns and bits are that trigger them. To turn over that particular rock is to go to some deeply disturbing places, and to lose the illusion that emotions make one more "human". Meanwhile, if one's reaction is more hard-coded, shouldn't it be considered more machine-like?
President Dwight D. Eisenhower put it succinctly in his farewell address to the nation:
"The prospect of domination of the nation's scholars by Federal employment, project allocations, and the power of money is ever present and is gravely to be regarded. Yet, in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific technological elite."
In meditation, a common teaching is to examine an object for a long period, really just stare at it and allow your mind to focus on it fully. I see a coffee mug, it has a handle and writing on it, it's off-white and has little coffee stains. This descriptive mind goes a mile a minute normally, but eventually you can break through that and realize: this is just a collection of atoms, this is something reflecting photons and pushing back electrically against my skin's atoms. Even deeper, it's just part of the environment, all the things I can notice, like everything else we care about.
Such exercises can help reveal the nature of mind. There are many layers of this onion, and many separate onions vying for our attention at once. Rationality relies upon peeling back these superficial layers of the thought onion to get towards "the truth." That means peeling back biases, emotions, hunches, instincts, and all the little mental heuristics that are nice "shortcuts" for a biologically limited thinker.
But outside our minds, how is there any rationality left? It feels like another program or heuristic we use to make decisions to help us survive and reproduce.
This matches my observations, too.
> Cowen suggested that to understand reality you must not just read about it but see it firsthand; he has grounded his priors in visits to about a hundred countries, once getting caught in a shoot-out between a Brazilian drug gang and the police.
I don't mind if part of his motivation is to impress others, or if it's wasteful, etc. Why would his motivations have to be pure for it to be meaningful for him?
But you’re not going to learn the same things you would from travel. For example, you’re not likely to learn another language if everyone you talk to speaks English. Similarly for learning about other cultures that aren’t near you.
But I’m not sure how much brief travel to see the tourist sites helps, and hanging out with expats might not help so much.
If our ancestors had made the rational assessment that a predator was unlikely to be hiding behind the bush, that would have worked only as long as it worked, until one day they got eaten.
Irrationally overestimating threats and risks is not optimal in any single instance, but as long as you can survive the cost of overreacting, it can be a long-term optimal approach.
Humans using irrational stories to enable group cohesion and coordination are similarly irrational but intrinsic ways of being that also provide an evolutionary advantage.
Rationality, however, is an incredible optimization tool when operating in domains that are well understood, like the example of stereo equipment that the author gave in the article. It can also help in the process of expanding knowledge, by helping us systematically compare and contrast signals.
But it doesn't prevent the lion from eating you or the religious or temporal authority from ostracizing you from the safety of the settlement, and it may even make both of those outcomes more likely.
That wouldn't have been a rational assessment, because it wouldn't have been an accurate assessment of the risks of being wrong, and the behavior required to avoid them.
If there's only a 1% chance that a predator is behind a bush, and that predator might eat you, it's absolutely rational to act as though there is a predator. You'll be seeing lots of bushes in your life, and you can't escape from those 1% chances for long.
The same thinking is why it would have been rational to try and avoid global warming 30 years ago. Even if the science was not settled, in the worst-case scenario, you'd have "wasted" a bunch of money making green energy production. In the best-case scenario, you saved the planet.
Avoidance of all possible risk is a recipe for paralysis. Part of being rational is evaluation of risks vs rewards as well as recognizing the dangers of unintended consequences and the fact that nearly all meaningful decisions are made with incomplete information and time limits.
In the past it was a rational concern to be worried about being jumped by a predator from behind a bush, and if you don't know whether or not there is a predator, it is perfectly rational to entertain such a concern!
Same with diseases and causes when you don’t know what is causing them, etc.
There's a tendency to dismiss older concerns, from a time when there was a severe lack of information, as irrational; yet when you know your limits and can see the results, there is no other rational way to behave except to be concerned or to avoid those things. And while it's not rational to believe clearly contradictory religious dogma that covers the topic, it is rational to follow or support it when it aligns with visibly effective methods, encoded within it, for avoiding disease and other problems.
I think we agree, but I also think you are using "rational" here in the colloquial sense to mean the "smartest" thing to do.
The article, and my comment in response, uses the traditional definition of "rational" as something derived from logic, and not from impulse or instinct.
The two definitions are not the same (not that one is better than the other, they just mean different things).
In Bayesian decision theory, you'd choose the action (walk directly by the bush; walk by the bush but have your guard up; steer clear of the bush) that minimizes your loss function (e.g. probability of dying or probability of your blood line dying out). You'd end up picking a path that balances the risk of being eaten by a lion with the cost of having to walk further (and thus having less time and energy to gather food; or tripping and cutting yourself and dying of infection; or whatever).
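A minimal sketch of that calculation (all probabilities and costs invented):

```python
P_PREDATOR = 0.01          # prior probability a predator is behind the bush

actions = {
    # action: (P(die | predator present), extra cost of the detour)
    "walk directly by":   (0.50, 0.0),
    "pass with guard up": (0.10, 1.0),
    "steer well clear":   (0.01, 5.0),
}

def expected_loss(p_die_given_predator, detour_cost, cost_of_dying=1000.0):
    return P_PREDATOR * p_die_given_predator * cost_of_dying + detour_cost

for action, params in actions.items():
    print(f"{action}: expected loss {expected_loss(*params):.2f}")
print("choose:", min(actions, key=lambda a: expected_loss(*actions[a])))
# With these numbers the balance lands on "pass with guard up": cautious,
# but not paying the full price of the long way round.
```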
Simple example:
Let's say the same pair of shoes is available in two different shops, but in one shop it's more expensive. It seems more rational to buy it in the cheaper shop. However, what if you've heard that the cheaper shop is very unethical in how it conducts its business? Is it still more rational to buy the shoes there?
And then you might also start considering this situation "in the grand scheme of things" - in the grand scheme of things does it make any difference if I buy it in shop A or B?
And at which point does it become irrational to be overthinking simple things in order to try to be rational? What if trying to always be rational is stressing you out, and turns out to be worse in the long run?
If consumer ethics is important to you then it obviously warrants some deliberation, weighted by an upper bound of your potential impact. But identifying areas of meaningless choice and simply choosing randomly (and not even caring if the choice is sufficiently random) frees up a lot of mental energy.
Some will say that buying from Amazon simply perpetuates Amazon... but Amazon is so large at this point that it doesn't matter WHAT I do. So ultimately, is the world better off with my two donations from my Amazon purchase or giving my money away for the same product to ShoeCo?
If your donations have some tiny bit of meaning to them, then removing a tiny bit of business from Amazon and paying your local shopkeeper probably also has meaning.
There's a YouTube channel (1) called Street Epistemology which has a guy interview members of the public and ask them if they have a belief they hold to be true such as "the supernatural exists" or "climate change is real" or "x is better than y".
He then asks them to estimate how certain they are that it's true.
Then they talk. The interviewer asks a question and makes notes, then tries to summarise the reply. He questions how they know what they think they know and at the end he asks them to again say how confident they are that what they said is true.
It's fascinating to see people actually talk about and discuss what are usually unsaid thoughts, and it exposes some glaring biases and logical fallacies.
Thanks for correcting me. I will refrain from ever using virii again!
Exactly what you said. Once you accept one toxic thought, it tends to branch out into other decisions. Unfortunately there are many, many memes out there ready to cause an infection.
These things can be fatal.
Source: https://literatureandhistory.com/index.php/episode-032-trees...
Now, to be more generous, I will assume that people are actually criticizing how "institutions impose a mainstream view that is difficult to replace even when the facts say it should be". To that I say: fine. But even in this case, there should be enough resources to form a rational opinion on the matter (with probabilistic reasoning). Hell, I have a lot of non-orthodox opinions that are so far outside the Overton window that I can rarely discuss them. And even in these cases, the internet and Google Scholar/Sci-Hub were sources that helped me explore them.
So, I have no sympathy for this "institutions lied to us, let me believe now whatever I want" bullshit.
I know a guy who hates foo (using a place holder). In fact he's downright foophobic. He is pretty convinced he has a natural unbiased hate of foo and is being rational when he expresses it.
To me as an outsider it is pretty obvious that his hate of foo is the result of cultural conditioning. To him it is perfectly rational to hate foo and to me it is totally irrational, especially since he can't give any concrete reason for it.
So who is right and who is being rational?
It could be that, like dietary restrictions to reduce the spread of disease, the foophobia is no longer needed, but keep Chesterton's fence in mind before you say it's unneeded.
I think part of the problem is that most people are conditioned into many beliefs from a young age.
I think it's irrational to not consider new information when it's presented. So, again, this depends on what foo is. If it is obeying speed limits even when no one else is on the road, and your friend learns the penalties for not obeying road signs when they get their license, they would probably find it irrational to not do the speed limit, even if they hate it. They wouldn't want to risk the fines, license suspension, etc.
However, let's say your friend's brother has stronger beliefs and can afford any fines and legal action. He could think about it and still decide that it's rational to not obey the speed limit. This doesn't make it right; I think right and rational are independent of each other.
For example: Throw salt over your shoulder if you spill some -or- Green skinned people are bad and you should never trust them or allow them in your neighborhood.
Now the former is pretty harmless but not so the latter. In both cases the only explanation is "that's how I was raised" which I don't find compelling or rational.
I like chocolate ice cream more than vanilla ice cream, and you're not gonna convince me otherwise by debating the flavor with me. It entirely could be the case that my preference is from cultural conditioning, but it's not my concern.
If your friend has a mindset of "to each his own" there's no problem.
...is a pretty silly phrase. If you don't have a reason for something, it can't (by definition) be reasonable.
In my experience, people usually can give ‘concrete’ reasons for it, but what constitutes ‘concrete’ is a matter of opinion, and I don’t consider everybody’s reasons to be valid. But of course, they do.
1) It’s not reasonable to expect someone to dig so deeply, and there isn’t enough time to do it for every issue.
2) Someone, somewhere, has done an even deeper dive into the same issue. From their perspective, I’m the one that hasn’t done my research. When it’s “enough” is a fuzzy line.
Virtually every political disagreement is based on values, though most of the time people don't recognize it.
Values determine priorities and priorities underpin action.
For example some people feel that liberty (e.g. choice) is more important than saving lives when it comes to vaccines.
Some people feel that economic efficiency is less important than reducing suffering.
Some people feel that the life of an unborn child is worth less than the ability to choose whether to have that child.
Even in the article, is a stereo that sounds better actually better than a stereo that looks better? That is a value judgement and there is no right or wrong.
No one is actually wrong, since everything is value judgements. Many people believe in a universal view of ethics/morality. There is almost no universal set of ethics/morality if you look across space and time.
However some values allow a culture to out-compete other cultures, causing the "inferior" values to disappear. New mutations are constantly being created. Most are neutral and have no impact on societal survival. Some are negative and some are positive.
Take money, for example. You can create a theoretical decision-making dilemma involving certain sums of money and work out what the most rational strategy is, but in reality, what a given sum of money is worth is going to differ between people, depending on their value systems and competing interests. So then you get into this scenario where 1 unit of money means something different to different people (the value you put on 1 € is going to be different from the value I put on it; the exchange rates are sort of an average over all these valuations), which might throw off the relevance of the theoretical scenario for reality, or change the optimal decision.
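For instance (a sketch assuming log utility, one conventional but by no means uniquely correct model of diminishing returns): the identical 50/50 gamble over 1000 € is clearly irrational for a poor agent and nearly neutral for a rich one.

```python
import math

def expected_utility_gain(wealth, stake=1000.0):
    """Change in expected log-utility from taking a 50/50 win/lose gamble."""
    u = math.log
    return 0.5 * u(wealth + stake) + 0.5 * u(wealth - stake) - u(wealth)

for wealth in (2_000, 20_000, 200_000):
    print(f"wealth {wealth:>7}: {expected_utility_gain(wealth):+.5f}")
# Roughly -0.144, -0.00125, -0.0000125: the same sum of money carries a
# very different weight depending on who is holding it.
```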
The other issue, besides the one you're relating to (the subjectivity of the weights assigned to different outcomes, the Achilles' heel of utility theory), is uncertainty not just about the values in the model, but about whether the model is even correct at all. That is, you can create some idea that some course of action is more rational, but what happens when there's some nontrivial probability that the whole framework is incorrect? Your decision about A and B, then, shouldn't just be modeled in terms of whatever is in your model, but all the other things you're not accounting for. Maybe there are other decisions, C and D, which you're not even aware of, or someone else is, but you have to choose B to get to them.
Just yesterday I read this very well-reasoned, elegant, rational explanation by an epidemiologist about why boosters aren't needed. But about 3/4 of the way through I realized it was all based on an assumption that is very suspect, and which throws everything out the window. There are still other things their arguments were missing. So by the end of it I was convinced of the opposite conclusion.
Rationality as a framework is important, but it's limited and often misleading.
Disagree; value systems are the inputs to rationality. The only constraint is that you do the introspection in order to know what it is that you value. In that sense buying a stereo based on appearance is the right decision if you seek status among peers or appreciate aesthetics. It's the wrong decision if you want sound quality or durability.
I think the real issue is that people don't do the necessary introspection, and instead just glom onto catch-phrases or follow someone else's lead. That's why so many people hold political views that are contrary to their own interests.
That we are able to think somewhat rational-ish is only because we adapted by running extensive modeling simulations. The fundamental function of these simulations is to simulate other beings, primarily humans. And in that, our brainware is lazy as hell, because, to quote evolution: why do perfect when you can do good enough? Saves a ton of energy.
The wetware we employ was never expected to rationally solve differential equations or do proper statistical analysis. At best it was expected to guess the parabola of a thrown stone or spear, or estimate the best way to mate without facing repercussions from the tribe.
So, really, it's not that thinking is hard. It's just that we're not equipped to do it.
"Confirmation Bias" does not quite capture it. Really just laziness. :)
The other part, being decisive... I can definitely relate to that. I noticed that I often have a hard time making decisions, and realized it's because I tend to look at the world in terms of what I can possibly lose, instead of looking at something new in terms of excitement.
I would argue we've largely been anesthetized due to successful Gish Galloping. I have great admiration for people who put the effort in to sort out the issues, academics and journalists. But just now everyone eye-rolled when I said those two terms.
Here's a 20-minute audio interview[0] with the author of "The Scout Mindset: Why Some People See Things Clearly and Others Don’t".
It very well summarizes the way I like to gather information in an area so that I can form an opinion and direction of movement on a problem.
Early on during the pandemic (the first half of February 2020) the people writing on Twitter about covid in China were being labeled as conspiracy nuts, with some of them outright having their accounts suspended by Twitter. Covid/coronavirus was (I think purposefully) kept out of the trending charts on Twitter, the Oscars were seen as much more important.
And these are only two recent examples that came to my mind where the "rational" parts of our society (the experts and the media) failed completely; as such, it's only rational not to trust these pseudo-rational entities anymore. In a way I think the post-modernists were right: (almost) everything is negotiable or a social construct; there's no true or false, apart from death, I would say.
Being self-aware is something I've only started learning post-college, and something I wish I had been taught more growing up. As a child I was always told that I should do x and y because that's what you're supposed to do! Only now, as an adult, am I taking the time to slowly ponder and analyze myself and be more strategic about my future goals.
Side note: really enjoyed the audio version of this long-form article.
I think a lot of political disagreements aren't really about logical arguments at all, but rather differences in opinion over relative priority of some ideals that are all important. There isn't always an objective right answer.
Reason needs axioms (beliefs) to build a rational discourse, and without emotions, it is impossible to choose a limited set of starting axioms to begin making logical inferences from.
I agree with the person above who said being rational is about making post-hoc rationalizations. We know from cognitive science that a majority of explanations are built that way: after observing facts, we intuitively develop a story that is consistent with our expectations about the facts, as well as with our preconceived beliefs. "Being rational" in this context would be limited to reviewing our beliefs when these ad-hoc rationalizations become inconsistent with one another.
I highly recommend reading it. I found it extremely clarifying as a working scientist/engineer and someone who has been persistently nagged by the deification of rationality.
The OP even uses the term “metarational” (though used to mean something different), which made me surprised when “The Eggplant” was not mentioned.
This approach even favors the most informed and trained (the "best" being preferable to the "better"), offering an even more difficult challenge.
Indirect democracy replaces rationality with ill-formed trust.
> Greg...became a director at a hedge fund. His net worth is now several thousand times my own.
So many distractions. Wind, rain, bees, rampant squirrels.
And what makes that game more interesting than a squirrel anyway?
(And you're playing the game against the squirrels anyway.)
Rationality is a form of communication. Its purpose is to persuade other people and coordinate group activity, e.g. hunters deciding where they should hunt and making arguments about where the prey might be. In that setting, rationality works perfectly well, because humans are quite good at detecting bad reasoning when they see it in others.
Because of the assumptions of psychological individualism, rationality is misunderstood as a type of cognition that guides an individual's actions. To a certain extent, this is a valid approach because incentives within organizations encourage people to act this way. We reward individual accomplishments more than collaboration.
But many cognitive biases disappear when you aren't working under the assumptions of psychological individualism. For example, in the artificial limitations of a lab, you can show that people are unduly influenced by irrelevant factors when making purchase decisions. But in reality, when a salesperson is influencing someone to spend too much on a car, people say things like "Let me talk it over with my wife."
We instinctively seek out an environment of social communication and collaboration where rationality can operate. Much of the advice about how to be individually rational comes down to simulating those conditions within your own mind, like scrutinizing your own thinking as if it was an argument being made by another person. That can work, but the vast majority of people adopt a more straightforward approach, which is to simply use rationality as it was designed to be used.
Rationality is hard, but only for the small number of "smart people" whose individualistic culture prevents them from using it in the optimal way.
I think the hardest bit of this is in some ways the middle, wanting things. How do we know we really want what we want, and how do we know what will make us happy. That’s the bit I struggle with anyway.
Not being rational - and instead going with your gut - has an evolutionary advantage: it cuts through the noise, which, in the past, could be a life or death situation.