You distinctly remember the times you had an outlier belief which you ignored to your detriment, but when taking stock of the past you tend to forget all the foolish-seeming ideas you had that were, in fact, very much foolish.
I think the opposite advice might be even more useful for many, many people: be more scared of overconfidence, because you know less than you think. Research on cognitive biases and the underperformance of "experts" in many fields tends to support this view.
* Avoid groupthink. If everyone thinks it's an outlier then it probably isn't. I won't touch anything that more than two people tell me to look at.
* Sometimes you do know something the market doesn't. Sometimes you can maintain a position longer than the market can remain irrational. This is something that happens every day.
* Ask why it's different. Outliers have characteristics that distinguish them from their peers.
* Look at the population the potential outlier belongs to (businesses, athletes, commodity futures, etc.). Understand key statistics such as variance, confidence intervals, range, mean, and mode. This helps answer basic questions like "To be successful, will this need to be in the top 20% or the top 0.002%?", "If this fails completely, what will it cost me?", and "What is the range of the 95% most likely outcomes?" A lot of people will avoid something that feels riskier than it actually is.
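Those population questions can be answered mechanically once you have outcome data for the reference class. Here's a minimal sketch using Python's standard library on invented, randomly generated numbers (a real analysis would substitute actual outcomes for the population in question; mode is omitted since it's not meaningful for continuous data):

```python
import random
import statistics

# Hypothetical outcomes (e.g. % return) for a made-up population of
# 1,000 comparable ventures -- purely illustrative, not real data.
random.seed(0)
outcomes = [random.gauss(5, 20) for _ in range(1000)]

mean = statistics.mean(outcomes)
var = statistics.variance(outcomes)
lo, hi = min(outcomes), max(outcomes)

ranked = sorted(outcomes)

# "What is the range of the 95% most likely outcomes?"
p2_5 = ranked[int(0.025 * len(ranked))]
p97_5 = ranked[int(0.975 * len(ranked))]

# "To be successful, will this need to be in the top 20% or the top 0.002%?"
top_20_cutoff = ranked[int(0.80 * len(ranked))]

print(f"mean={mean:.1f} variance={var:.1f} range=({lo:.1f}, {hi:.1f})")
print(f"central 95% of outcomes: ({p2_5:.1f}, {p97_5:.1f})")
print(f"top-20% cutoff: {top_20_cutoff:.1f}")
```

Seeing that, say, the central 95% of outcomes spans a modest range is exactly the kind of evidence that makes something feel less risky than gut instinct suggests.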
However, it also means that saying things like "take a risk, do the work, put yourself out there" does not make sense for most people because they lack an unfair advantage.
"Here is how I did X" stories are almost always useless for 99% of people because they gloss over unfair advantages such as having gone to Stanford/Harvard/other, having VC connections, having worked at X company, etc. Most people cannot replicate the initial state of these startup stories, let alone the end result.
Same goes for working at Google as your only job and then writing a definitive book about how SWE is done, without further research into what else is out there.
It's sort of like all those Amazon reviews that used to say "In exchange for an honest and unbiased review, I received the product". Before Amazon auto-removed them, they were a strong negative quality signal.
I would prefer that we preserve high comment SNR by encouraging people to use standard comms patterns, so that my automatic comment bucketing will let me skip non-useful comments, e.g. "correlation is not causation", "survivorship bias", etc.
I highly recommend Ben's post dedicated to outliers where he explores this a bit more: https://www.benkuhn.net/outliers/.
> Maybe society would be better off if we stepped back from the always-on, up-or-out culture and accepted that most people are average and (rightly!) value consistency more than upside
Based on how I'm interpreting this, you have it backwards. I don't think outlier results with high upside generally benefit from an "always-on" or "up-or-out" culture, because that is too far on the exploit side of things to ever find outliers!
My belief is that more slack and time to explore are more likely to produce outliers.
Ben mentions his writing on outliers [1] and how "low-info heuristics tell you that outliers can’t exist", which is completely true but I want to add another perspective to that.
As mentioned in the outliers piece, you want to rule things in, not out. The average case is good for ruling things out, but terrible for ruling things in. Specifics of your scenario will always be the reason to rule in things that look bad in the average case.
In turn, it can be useful to adjust the very one-sided public estimates based on your own nuanced reasoning. This could change the status-quo probability from 99% to, say, 60%.
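One principled way to make that adjustment is Bayes' rule in odds form: multiply the public base rate's odds by a likelihood ratio capturing how much more probable your scenario-specific evidence is under one hypothesis than the other. A toy sketch, with the likelihood ratio invented purely so the posterior lands near the 60% figure above:

```python
def update_prob(prior_prob, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Public base rate: 99% chance the status quo holds.
# Suppose your private, scenario-specific evidence is ~66x more likely
# if the status quo will break than if it will hold (invented number).
p = update_prob(0.99, 1 / 66)
print(f"adjusted status-quo probability: {p:.2f}")
```

The point isn't the specific numbers; it's that "specifics of your scenario" enter as a likelihood ratio on top of the base rate, rather than replacing it outright.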
I appreciate the casual trashing of Theorem's trustworthiness as someone who has been on the receiving end of the lack of it.
> (...) efficient market hypothesis (...)
Classic case of "just read a new idea and I'll apply it everywhere".
On top of everything else wrong with this post: no, you can't reach reliable conclusions about startups by appealing to the efficient market hypothesis.