(Though at some point, maybe in the second half, the book drags and you can skip most of those chapters. If you don't have time for that, I'm sure ChatGPT can give you a taste of the main premises and you can probe deeper from there.)
It’s still very much worth reading in its own right, but now implicitly comes bundled with a game I like to call “calibrate yourself on the replication crisis”. Playing is simple: every time the book mentions a surprising result, try to guess whether it replicated. Then search online to see if you got it right.
You can ignore anything said in chapter 4 about priming for example.
See https://replicationindex.com/2020/12/30/a-meta-scientific-pe... for more.
The irony is that Kahneman had himself written a paper warning against generalizing from studies with small sample sizes:
"Suppose you have run an experiment on 20 subjects, and have obtained a significant result which confirms your theory (z = 2.23, p < .05, two-tailed). You now have cause to run an additional group of 10 subjects. What do you think the probability is that the results will be significant, by a one-tailed test, separately for this group?"
"Apparently, most psychologists have an exaggerated belief in the likelihood of successfully replicating an obtained finding. The sources of such beliefs, and their consequences for the conduct of scientific inquiry, are what this paper is about."
Then 40 years later, he fell into the same trap. He became one of the "most psychologists".
http://stats.org.uk/statistical-inference/TverskyKahneman197...
I imagine that in 30 years, it will become clear that individual humans display enormous diversity, their diversity increasing as societal norms relax, and their behavior changing as the culture around them changes. As such, replication is hopeless and trying to turn "psychology" into a science was a futile endeavor.
That is not to say that psychology cannot be helpful, just that we cannot infer rational conclusions or predictions from it the same way we can from hard sciences.
Self help books are enormously helpful, but they're definitely not science either.
I don't think we - people used to STEM - appreciate how difficult behavioral psychology is. In STEM, we are used to isolating experiments so that there are as few variables as possible. And we are used to well-designed experiments being reproducible if everyone does what they are supposed to do.
In the study of human behavior there are always countless uncontrollable variables; every human is a bit different, and it is very difficult to discover something that applies generally. Also, pretty much all of the research is done on Western populations of European descent.
This is why I take all behavioral claims with a large grain of salt, but I still have respect for the researchers doing their best in the field.
Beyond this specific issue, are psychology experiments and their findings time- and culture-sensitive? I think so [1].
[1] https://www.ssoar.info/ssoar/bitstream/handle/document/42104...
My impression is that the priming chapter is bunk, but the rest has generally held up. Is that no longer true?
* 1st third of the book: Lays out the basic ideas, gives several examples
* 2nd third of the book: More examples that repeat the themes from the 1st part
* 3rd third of the book: ??? I usually give up at this point
I sometimes wish that more books were like "The Mom Test" - just long enough to say what they need, even if that makes for a short book.
> But let’s say you can narrow it down to one good one, and you can find the time to read it. You plunk down an absurd $30 (of which, I’m told, less than $3 goes to the author) for a bulky hardcover and you quickly discover that the author doesn’t have all that much to say. But a book is a big thing, and they had to fill it all up, so the author padded it. There are several common techniques.
> One is to repeat your point over and over, each time slightly differently. This is surprisingly popular. Writing a book on how code is law, an idea so simple it can fit in the book’s title? Just give example after example after example.
> Another is to just fill the book with unnecessary detail. Arguing that the Bush administration is incompetent? Fill your book up with citation after citation. (Readers must love being hit over the head with evidence for a claim they’re already willing to believe.)
> I have nothing against completeness, accuracy, or preciseness, but if you really want a broad audience to hear what you have to say, you’ve got to be short. Put the details, for the ten people who care about them, on your website. Then take the three pages you have left, and put them on your website too.
The author's goal is to convey an idea to the reader. He breaks it up into small overlapping chunks and gradually doles out these small overlapping chunks over the course of the book, sometimes backtracking and repeating an idea with a different example, all accompanied by a compelling narrative.
If he does his job well then the reader doesn't notice that spaced-repetition learning is happening because the supporting examples are entertaining enough to continue reading. In the worst case, the author gets the exact criticism that you are leveling.
Honestly, if you had Mathematics books written like Thinking, Fast and Slow or Freakonomics you'd have a lot more students passing calculus.[1]
So, here's a challenge for you - in your area of expertise (whatever that is), write down the chapters of a hypothetical book you would write to explain one or two foundational principles to an outsider (to that area). It's pretty hard to do. Then compare with best-selling non-fiction aimed at outsiders like Freakonomics, etc.
I did this (chapter overview thing) and realised pretty quickly that I had planned a really boring book.
[1] I read Thinking, Fast and Slow around 2011, and I read The Mom Test last year. Almost all of the sub-themes of the former are still in my memory. The only thing I remember of The Mom Test is that people will lie to you to protect your feelings.
Most non-fiction could be well summarized as a lengthy blog post.
History of science books thankfully stave off that final third until at least 80%. However, their final chapter or two universally manages to be a letdown. It's either wild optimistic speculation, hype for a theory that's debunked 5 years after publication, or a focus that accidentally happened to predict the course of science post-publication. The story is told in a tonally jarring manner compared to the tight narrative in the rest of the book.
My #1 suspect for this disease is a desire to connect the content of the book to real life. Such attempts miss more often than they drive the point home, even if they're factually correct.
I must admit this headline shocked me for the simple reason that... I straight up had no idea that he was so old.
Thank you, Daniel, for the way you've influenced my (and our) thinking in ways that are still impacting us today, both in work and in the rest of life. Rest in peace.
I've tried to read this book over and over again to understand what everyone is talking about, but I never found the insights that useful in practice. Like, what have you been able to apply these insights to? What good is it to know that we have a slow mode of thinking and a fast one? Genuine question.
Knowing when it's likely that you're biased, and trying to work around that (highly related to the above). (E.g., don't make critical decisions when you're sleep-deprived.)
Knowing how to make use of other people lacking this ability (e.g., exploit it in sales processes).
From GPT-4:
The Two Systems: Kahneman introduces the concept of two distinct systems that govern our thoughts. System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control. System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
Heuristics and Biases: The book explains how the fast, intuitive thinking of System 1 leads to the use of heuristics—a kind of mental shortcut that facilitates quick judgments but can also lead to systematic biases and errors in thinking. Kahneman discusses several of these biases, such as the availability heuristic, where people judge the probability of events by how easily examples come to mind, and the anchoring effect, where people rely too heavily on the first piece of information they encounter.
Overconfidence: One of the themes of the book is the confidence people place in their own beliefs and judgments. Kahneman shows that people tend to overestimate their knowledge and their ability to predict outcomes, leading to a greater confidence in their judgments than is warranted. This overconfidence can contribute to risky decision-making and failure to consider alternative viewpoints.
Prospect Theory: Kahneman, along with Amos Tversky, developed Prospect Theory, which challenges the classical economic theory that humans are rational actors who always make decisions in their best interest. Prospect Theory suggests that people value gains and losses differently, leading to decisions that can seem illogical or irrational. It highlights the asymmetry between the perception of gains and losses, where losses are felt more acutely than gains are enjoyed.
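That asymmetry between gains and losses can be sketched in a few lines. The parameters below are the median estimates Tversky and Kahneman reported for their 1992 cumulative prospect theory value function; the function name is mine, and this is an illustrative sketch rather than anything from the book itself:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex for losses, and losses weighted about 2.25x."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A $100 loss "hurts" roughly twice as much as a $100 gain feels good:
gain = prospect_value(100)    # ~57.5
loss = prospect_value(-100)   # ~-129.5
print(gain, loss)
```

The point of the curve is exactly what the summary says: the pain of a loss outweighs the pleasure of an equal-sized gain.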
Happiness and Well-being: The book also delves into the determinants of happiness and well-being, distinguishing between the experiencing self (which lives in the present) and the remembering self (which keeps score and makes decisions). Kahneman explores how our happiness is influenced more by how life events are remembered than by the actual experience. This leads to some counterintuitive findings, such as people being happier with experiences that end on a high note, regardless of the overall quality or duration of the experience.
(lots less controversial than Going Infinite)
It has its good parts, like elaborating on System 1 and System 2, but my favorite concept was regression to the mean. It might be obvious in some cases, but the book made me realize that it applies nearly everywhere.
The bad parts include priming (e.g. the Florida effect), which, as others mentioned, could not be replicated. He sometimes praises himself for even trivial observations. But my biggest gripe is that he dismisses Bernoulli's hypothesis in favor of his loss aversion (I still think humans apply a mix of both), while also framing loss aversion as irrational. That is, on his account humans should always maximize the expected outcome (in terms of money). The reasoning is that during life we encounter a continuous stream of decisions, and maximizing the expected value in each decision will (by the law of large numbers) maximize overall income.
It's not (always) irrational. Imagine you have a million dollars. Someone offers you a fair coin flip: heads you gain two million dollars, tails you pay one million. With a million USD in your bank account you had a quite comfortable life, and it could get more comfortable with three million. But if you lose, you are ruined. According to Kahneman you should take that gamble. Also consider that before the invention of money, such decisions were typically whether to hunt that mammoth or something less aggressive.
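Bernoulli's own resolution for exactly this kind of case was diminishing marginal utility; log utility is the classic choice, used here purely as an illustration of the coin-flip gamble above:

```python
import math

def log_utility(wealth):
    # Bernoulli-style log utility: being wiped out is infinitely bad
    return math.log(wealth) if wealth > 0 else float("-inf")

start = 1_000_000
outcomes = [(0.5, start + 2_000_000),   # win: end with 3M
            (0.5, start - 1_000_000)]   # lose: end with 0, ruined

expected_wealth = sum(p * w for p, w in outcomes)                 # 1,500,000
expected_utility = sum(p * log_utility(w) for p, w in outcomes)   # -inf

# Pure expected-value maximization says take the gamble...
print(expected_wealth > start)                 # True
# ...but expected log utility says decline it.
print(expected_utility < log_utility(start))   # True
```

The expected monetary value is positive, yet any utility function that treats ruin as catastrophic tells you to walk away, which is the sense in which declining is not irrational.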
The German version of "Who Wants to Be a Millionaire" has a particularity: the prize jumps from €125,000 to €500,000 at the 14th question (a consequence of the conversion from Deutsche Mark to euros). Assume you have no idea what the answer is. According to Kahneman you should always pick one at random: if you pick right you get €500,000; if you pick wrong you still win €32,000, or €500 if you took the 4th lifeline, as most contestants do. In the lifeline case that makes an expected win of (3/4) × €500 + (1/4) × €500,000 = €125,375, compared to a guaranteed €125,000 if you don't answer. Would you do it?
Leadership, pride, excellence, empathy, and fairness must not fall into the decay of jingoist buzzwords; they must remain values backed by intent and unwavering action.
The greatest danger is dishonesty: when words stop having ordinary meanings, when people stop talking to each other, or when there's a lack of agreement on the obvious intersection of a shared reality.
Which surely is one of the best things you can say about a scientist.
One half-joking comment he made about science in the real world vs some idealized notion of it has always stuck with me. In a discussion about whether the results of some paper conflicted with some model or theory of cognition, he mused that scientific progress in psychology (and other non-hard sciences) was really about embarrassing rivals with competing models. No high-level model was ever stated precisely enough to rule out some particular finding; you could always tweak your theory a little to accommodate it. It's just that at some point, you might be too embarrassed to do so.
"As soon as you present a problem to me, I have some ready-made answer. Those ready-made answers get in the way of clear thinking, and we can’t help but have them." – Daniel Kahneman
His work got torn to shreds by science; what did you expect him to do?
It really should be embarrassing rather than acceptable. It doesn't matter how honest you are AFTER you are caught.
Between the replication crisis and his name on some bad papers, the guy seriously didn't care about correctness as much as interesting-faux-science.
Most of the "underpowered studies" are in the priming-related chapter, called "The Associative Machine". The rest of the book is still worth a careful read.
(I had to make this same correction here several years ago. I didn't look up the comment to link it here.)
https://kahneman.scholar.princeton.edu/
https://www.washingtonpost.com/obituaries/2024/03/27/daniel-... | https://archive.today/tZY2w ("The Washington Post: Daniel Kahneman, Nobel-winning economist, dies at 90")
https://www.bloomberg.com/news/articles/2024-03-27/daniel-ka... | https://archive.today/MpDes ("Bloomberg: Daniel Kahneman, Psychologist Who Upended Economics, Dies at 90")
I'd thought that this was reflected in some university departmental organisation, with M.I.T. being the one that came to mind. Despite there being a behavioural economics section there, though, so far as I'm aware Economics remains its own department.
Kahneman's training and primary focus were both in psychology, but he was awarded the somewhat problematic Nobel Memorial Prize in Economic Sciences. Multi-disciplinarity is in fact A Thing.
Princeton bio:
Daniel Kahneman is Professor of Psychology and Public Affairs Emeritus at the Princeton School of Public and International Affairs.... He has been the recipient of many awards, among them the Distinguished Scientific Contribution Award of the American Psychological Association (1982) and the Grawemeyer Prize (2002), both jointly with Amos Tversky, the Warren Medal of the Society of Experimental Psychologists (1995), the Hilgard Award for Career Contributions to General Psychology (1995), the Nobel Prize in Economic Sciences (2002), the Lifetime Contribution Award of the American Psychological Association (2007), and the Presidential Medal of Freedom (2013).
Some interesting talks with Daniel Kahneman
- https://www.edge.org/adversarial-collaboration-daniel-kahnem...
- https://replicationindex.com/2017/02/02/reconstruction-of-a-... Kahneman himself responds in the comment section to a very critical piece about his work.
Convince everyone to use a different model and it will work.
>> I [Kahneman] accept the basic conclusions of this blog. To be clear, I do so (1) without expressing an opinion about the statistical techniques it employed and (2) without stating an opinion about the validity and replicability of the individual studies I cited.
What the blog gets absolutely right is that I placed too much faith in underpowered studies. As pointed out in the blog, and earlier by Andrew Gelman, there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. We also cited Overall (1969) for showing “that the prevalence of studies deficient in statistical power is not only wasteful but actually pernicious: it results in a large proportion of invalid rejections of the null hypothesis among published results.” Our article was written in 1969 and published in 1971, but I failed to internalize its message.
My position when I wrote “Thinking, Fast and Slow” was that if a large body of evidence published in reputable journals supports an initially implausible conclusion, then scientific norms require us to believe that conclusion. Implausibility is not sufficient to justify disbelief, and belief in well-supported scientific conclusions is not optional. This position still seems reasonable to me – it is why I think people should believe in climate change. But the argument only holds when all relevant results are published.
I knew, of course, that the results of priming studies were based on small samples, that the effect sizes were perhaps implausibly large, and that no single study was conclusive on its own. What impressed me was the unanimity and coherence of the results reported by many laboratories. I concluded that priming effects are easy for skilled experimenters to induce, and that they are robust. However, I now understand that my reasoning was flawed and that I should have known better. Unanimity of underpowered studies provides compelling evidence for the existence of a severe file-drawer problem (and/or p-hacking). The argument is inescapable: Studies that are underpowered for the detection of plausible effects must occasionally return non-significant results even when the research hypothesis is true – the absence of these results is evidence that something is amiss in the published record. Furthermore, the existence of a substantial file-drawer effect undermines the two main tools that psychologists use to accumulate evidence for a broad hypotheses: meta-analysis and conceptual replication. Clearly, the experimental evidence for the ideas I presented in that chapter was significantly weaker than I believed when I wrote it. This was simply an error: I knew all I needed to know to moderate my enthusiasm for the surprising and elegant findings that I cited, but I did not think it through. When questions were later raised about the robustness of priming results I hoped that the authors of this research would rally to bolster their case by stronger evidence, but this did not happen.
I still believe that actions can be primed, sometimes even by stimuli of which the person is unaware. There is adequate evidence for all the building blocks: semantic priming, significant processing of stimuli that are not consciously perceived, and ideo-motor activation. I see no reason to draw a sharp line between the priming of thoughts and the priming of actions. A case can therefore be made for priming on this indirect evidence. But I have changed my views about the size of behavioral priming effects – they cannot be as large and as robust as my chapter suggested.
I am still attached to every study that I cited, and have not unbelieved them, to use Daniel Gilbert’s phrase. I would be happy to see each of them replicated in a large sample. The lesson I have learned, however, is that authors who review a field should be wary of using memorable results of underpowered studies as evidence for their claims.
Two things bothered me about it though - firstly, it landed shortly before the reproducibility issues of such research became more widely known.
Secondly - towards the end of the book, it espouses the idea that using some methods of psychological and behavioural manipulation is at worst net neutral, especially if there was nothing to see of the manipulation in question. After all, who can argue against organ donation being opt-out by default, or similar?
To me, this is like a magician claiming that there was no sleight of hand, since we were free to look wherever we liked during the performance. Denying the presence and capabilities of tools of manipulation is, in my opinion, incredibly dangerous, and the worst of its outcomes has been very publicly played out in recent years.
I personally find that telling people exactly what I intend to do makes it more effective rather than less. But in a field where we can change people's behavior by making a button orange instead of blue or presenting a form in one page vs three, I find it impossible to pretend that one of those is a neutral choice.
Instead, I focus on what it is we are maximizing for, and how people feel about the experience. I push my companies to choose patterns that help people feel secure & in control, leading to predictable outcomes that align with what they actually expressed wanting. It means we are collaborating with our users, even though we could have used those same techniques to make them feel more anxious, spend more money than they intended, or buy things they didn't actually need.
To a large extent, it's still dogmatic and prescriptivist, but unorthodox opinions (not just limited to behavioral economics) are more accepted & considered following Kahneman's input.
That was when I was studying it!
So much rubbish. "Economics is a science because it uses maths" is one favourite of mine.
I did over a decade of study on economics and finance and nobody, even once, mentioned Karl Marx, arguably the most influential economist in the last two hundred years.
It was very prone to fetishes. "Price mechanism" was one I recall. Every problem in society had to be shoehorned into a market so the "price mechanism" would get to work.
https://replicationindex.com/2020/12/30/a-meta-scientific-pe...
Same thing is happening w/ a lot of the work of Dan Ariely, but I think his situation is much worse.
And if I recall correctly, he addresses the replication issues from Thinking, Fast and Slow and discusses more recent research that disproves or adds nuance to the older studies. I think it's also more practically useful and applicable to everyday life. Where TFS gives you a "these are interesting facts about life" vibe, Noise is more "here's the problem and this is what you can do about it" in style.
That aside, I don't doubt that Kahneman was a brilliant mind, and I'm saddened by his passing. RIP.
"It must have been late 1941 or early 1942. Jews were required to wear the Star of David and to obey a 6 p.m. curfew. I had gone to play with a Christian friend and had stayed too late. I turned my brown sweater inside out to walk the few blocks home. As I was walking down an empty street, I saw a German soldier approaching. He was wearing the black uniform that I had been told to fear more than others – the one worn by specially recruited SS soldiers. As I came closer to him, trying to walk fast, I noticed that he was looking at me intently. Then he beckoned me over, picked me up, and hugged me. I was terrified that he would notice the star inside my sweater. He was speaking to me with great emotion, in German. When he put me down, he opened his wallet, showed me a picture of a boy, and gave me some money. I went home more certain than ever that my mother was right: people were endlessly complicated and interesting."
One of the few other books that's changed my thinking about my thinking in similar ways is Annie Murphy Paul's "The Extended Mind" - https://bookshop.org/p/books/the-extended-mind-the-power-of-... . It's hard to put anything at the level of Thinking Fast and Slow, but it felt like reading a sequel to that book.
RIP Daniel Kahneman.
I think you may be objecting to the idea of manipulation here rather than his point. Influence is not necessarily bad, if a dentist notices some poster which causes his patients to floss more shouldn't he keep it up?
Suggesting all manipulation is bad implies we shouldn't do public health education etc if it happens to be effective.
After all, floss is a single-use plastic, generally made of PTFE, the production of which requires all sorts of nasty forever chemicals.
https://conversationswithtyler.com/episodes/daniel-kahneman/
Now that "soft paternalism" has been so successful, the same policymakers are pivoting into hard paternalism.
I learned a lot from Thinking Fast and Slow, but it's also a cynical book. In the same vein as Skinner's behavioralist view of people.
Principles must always come first.
Isn't that a bad question to ask? It suggests there are only two possible outcomes; wouldn't a better question include a third option, "not a bank teller, and may or may not be an active feminist"?
I'd imagine there's enough of the biases Kahneman identified that have held up and don't involve artificial questions like this, designed to trick respondents, whose real-world applicability seems questionable at best...
Further, in the supplied example, I'd argue that the prior probability of Linda being a feminist (based on her being an activist, etc.) is probably higher than her not being one, so in a sense the respondents got it right (i.e., in that population, I'd argue there are more women who are bank tellers and feminists than bank tellers who are not feminists)...
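For what it's worth, the conjunction rule holds regardless of those priors: feminist bank tellers are a subset of bank tellers, so the conjunction can never be more probable than the single event. A toy count (the numbers are made up for illustration) makes this concrete:

```python
# Hypothetical population counts (illustrative only)
population = 1_000
bank_tellers = 20
feminist_bank_tellers = 19  # suppose nearly every teller like Linda is a feminist

p_teller = bank_tellers / population                   # 0.02
p_teller_and_feminist = feminist_bank_tellers / population  # 0.019

# Conjunction rule: P(A and B) <= P(A), no matter how strong the prior
assert p_teller_and_feminist <= p_teller
print(p_teller, p_teller_and_feminist)
```

So even granting that Linda is very likely a feminist, ranking "bank teller and feminist" above "bank teller" is still a probability error, which was Tversky and Kahneman's point.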
- It was also touched on in the original paper that Tversky and Kahneman put out https://psycnet.apa.org/record/1984-03110-001
If anyone has figured out how to do it using one's phone, please share. There used to be an app on the Google Play store, but it doesn't work on recent versions of Android. I created a spreadsheet-based random 4-digit number prompter, which isn't bad, but I'd like better ideas if anyone has any.
Kahneman was one of those people where I was just waiting to have a problem tough enough that I'd have a good reason to email him with a question, whether or not I'd get a response. I guess no longer.
Sam Harris jokes, "I have met these people". Daniel replies, "We have met them and we see them in the mirror" [0]
[0] 17:25 @ https://www.samharris.org/podcasts/making-sense-episodes/150...
I understand that sometimes you need assumptions to make the math work, but the fact that it took so long for behavioral economics and bounded rationality to be recognized is crazy. Just because the math is convenient doesn't mean people work that way at all.
I say this as someone who has taken a lot of econ classes, so I understand its value, but it is still very much a set of principles and ways of thinking about problems involving people, rather than something as exact as it's made out to be.
I got slightly off topic here, but seeing as how Daniel got the Nobel Prize in 2002 (pretty recently) and the work occurred in the 70s, it made me think again about how young the field is.
Really sad news...
[0] see e.g. https://www.nytimes.com/2009/09/06/magazine/06Economic-t.htm...
My dream is to one day have the caliber of insights this man had, along with his ability to express them so clearly and persuasively.
dang, does this deserve a black bar?
Kahneman is unequivocally the person I would call my hero, today I am sad to see him leave us. I hope to honor his memory by... I guess, recognizing just how wrong I am, on a regular basis.
In past centuries, this was a fairly common business model, but I understand the concept sounds pretty jarring to modern ears.
Professor Kahneman, who was long associated with Princeton University and lived in Manhattan, employed his training as a psychologist to advance what came to be called behavioral economics. The work, done largely in the 1970s, led to a rethinking of issues as far-flung as medical malpractice, international political negotiations and the evaluation of baseball talent, all of which he analyzed, mostly in collaboration with Amos Tversky, a Stanford cognitive psychologist who did groundbreaking work on human judgment and decision-making.
"Judgment under Uncertainty: Heuristics and Biases" is the paper that made them famous, and it's still a damn good read:
https://www2.psych.ubc.ca/~schaller/Psyc590Readings/TverskyK...
Each of us sees the world through our own sets of biases. None of us is immune. Any of us that can see clearly enough to move humanity’s body of knowledge forward even a smidgen is a rarity and a treasure. Even among that group this man (and his collaborators) did more than most. I don’t believe _I_ will be counted among those. I can’t fault Kahneman for making some mistakes. Finding and fixing those is part of the process, and requires others with a different set of perspectives and biases. At a future date, weighed against his contributions, I believe they will appear relatively small.
* I had read Eugene Koonin's "The Logic of Chance" and was then recommended Taleb's books for a more thorough perspective on probability, to apply to Koonin's work.