> Our sophistication increases with every choice we make, and so do our standards. Standards compete just as theories compete and we choose the standards most appropriate to the historical situation in which the choice occurs. [...] It forces our mind to make imaginative choices and thus makes it grow.
He often gets lumped together with continental thinkers and postmodernists like Foucault, with whom he has nothing in common.
Against Method is a very short and simple book, and I suspect that if you got a physicist, a chemist, a linguist, an engineer, a mathematician, an economist and so on to read it, they'd all be extremely underwhelmed and would just say "yeah, sounds about right, what's all the fuss about, and why is this even considered interesting or provocative?"
I also don't understand the other comments that say it's full of sophistry. There are a couple of "discussion" chapters at the end that you may or may not like, but the bulk of the book is a thorough analysis of famous theories and experiments in physics, such as Galileo's, which he handles with much more attention to detail than the idealized versions you get from Popper and the like. He has a completely fascinating account of why the church didn't like Galileo, which had as much to do with his orneriness as with his science.
I think it could have been written in a less provocative and eccentric way. Feyerabend had a certain rhetorical style that tends to get some folks unnecessarily riled up. Rewrite the core argument in a plain and simple way and I agree most working scientists wouldn't have a whole lot to object to (remember that Feyerabend was writing against other philosophers). Some working scientists have been inspired by the book, though. Here's a great quote from physicist Lee Smolin:
> What Feyerabend's book said to me was: Look, kid, stop dreaming! Science is not philosophers sitting in clouds. It is a human activity, as complex and problematic as any other. There is no single method to science and no single criterion for who is a good scientist. Good science is whatever works at a particular moment of history to advance our knowledge. And don't bother me with how to define progress — define it any way you like and this is still true.
> From Feyerabend, I learned that progress sometimes requires deep philosophical thinking, but most often it does not. It is mostly furthered by opportunistic people who cut corners, exaggerating what they know and have accomplished. Galileo was one of these; many of his arguments were wrong, and his opponents — the well-educated, philosophically reflective Jesuit astronomers of the time — easily punched holes in his thinking. Nevertheless, he was right and they were wrong.
> What I also learned from Feyerabend is that no a priori argument can tell us what will work in all circumstances. What works to advance science at one moment will be wrong at another. And I learned one more thing from his stories of Galileo: You have to fight for what you believe.
In junior high school I remember being taught about the scientific method, particularly the use of controls. I wrote 5 papers and didn't use a control in any of them (it wouldn't have been appropriate).
Even in cases where people obviously should use controls, such as clinical trials, they frequently don't. There was that paper where they measured vitamin C levels in COVID-19 patients but didn't compare them to a baseline of people who were not sick, which is problematic in many ways.
When people do a meta-analysis by the Cochrane methodology, they usually throw out at least half of the studies at the outset because of glaring methodological flaws. In practice it is not much better than anarchy in terms of what gets funded and published:
- Cherry-picking of data.
- Flawed or missing controls.
- Lack of replication, and in the few cases where it's attempted, failure to replicate.
- Non-publication of failures.
- Publish-or-perish providing huge incentives to publish junk.
- Peer review being an old boys' club that enforces the party line.
- All funding coming from a few sources that tacitly use their funding power to fund only those who toe the party line.
- So much basic science having been done by now that the remaining science to do is generally expensive to do, thus inviting the above funding / control problem.
- Dogmatism.
- Media attention.
These are the problems that plague science today. Some of them, like dogmatism, have been around for a long time. There are people alive today who were taught that the continents don't move, and that noticing that South America and Africa fit together and concluding that they must have moved is nonsense of the highest order. There are people alive today whose treatment for polio was not physical therapy but immobilization. The list of dogma, old and new, is long. The malign ways in which some lords of science fiefdoms defend their dogmas have not gone away in spite of Popper's method.
This is not true in my experience (I have designed and run pharmaceutical clinical trials in humans). Can you give some examples?
Some treatment trials compare the new treatment against the existing standard of care rather than against placebo, but those are certainly controlled studies too.
I do know of cases where a control is impossible, such as some surgical procedures, though even then sham surgery is sometimes performed (this is controversial). Those are single-blinded controls, since the surgeon knows which procedure was performed.
There is a big difference between "often not practiced" and "not really practiced [ever]". The former is true, the latter is not.
Over the years I grew uncomfortable with the fact that I could not track down any of his references, e.g., to Aristotle. I don't remember the specific cases any more, but still it was a bit unnerving.
I think this is part and parcel of the same “cargo-cult” appropriation of “science” that results in what you’re saying, and represents a real threat to science.
When really the intention comes from a different place and requires a slightly more open mind to grok.
Imagine spending your whole life working under method X. You study its precepts, gain practical knowledge, go out into the field and test your assumptions. Your desire is to end human suffering using method X, and build a better world where good triumphs over evil and everyone can pursue their bliss.
But you must sacrifice in order for method X to work; the precepts demand you honor the gods. If you are an Aztec, you end up with pyramids covered in blood and tens of thousands dead, yet you are still woefully unprepared for disease and Europeans. Your people suffer immensely and are virtually extinguished, and you die realizing your method was wrong and your sacrifices were in vain.
You can avoid that with a meta-method that helps you select proper methods.
The amount of human suffering that can be avoided if people are able to distinguish the effectiveness of method X from that of method Y is extreme. That is where the hard-earned victories of modernity come from. Science is the preeminent comparative meta-method for identifying which methods are most effective at alleviating human suffering, and people are right to uphold it (and to distinguish it from scientism and from confusion with scientific bureaucracy).
There is no universal, rule-based, propositional method of betterment, but to give up on the idea of any method being objectively better than another is to give up on the idea of meta-negotiation and the pursuit of universal peace and prosperity. The pursuit is worthwhile, even if it may not be fully achieved in billions or trillions of years and may be more of an art than a system of computation.
The astronomers hate being lumped in with the astrologers, and with good reason. Feyerabend points out that it's maddeningly hard to be rigorous about exactly what that reason is. When the astronomers claim "the scientific method", this book shows them that they're wrong, but without suggesting a really good alternative.
Giving $10 billion to the astrologers (the price of the JWST) is not an option. It would be nice to have a method to say why. Ultimately, that's the real controversy: not epistemology, but money.
He has a lot to do with them. One of the themes in Derrida’s “The Truth in Painting” revolves around the maxim, “there is no passe-partout (a master key that opens all locks).” Foucault’s “Madness and Civilization” is partially about the lack of a single axis of “Reason”, which would presumably be the antonym of a similarly univocal “Madness”.
See this review for a decent overview of it:
There's 'science', 'morality', etc. Not all decisions are scientific, even if they require scientific knowledge as a factor. Here he aligns with Foucault when discussing power and the urge to use science, or Hayekian scientism, to say there is one true way and the decision should be X.
Einstein; Gödel; Popper => Kuhn => Feyerabend (among others) have basically wrecked the big modern positivist project (not that I blame them).
Quite related:
https://medium.com/s/story/peterson-historian-aide-m%C3%A9mo...
And:
https://ndpr.nd.edu/news/french-theory-how-foucault-derrida-...
Later in life, after becoming a software engineer, it occurred to me that this point of view bears some resemblance to managers trying to determine whether a software engineer or a team of engineers is doing good work. If you apply a method too rigorously, you'll end up rewarding the wrong people.
It's been ages since I read these philosophers, but in my mind Feyerabend's position sort of boiled down to 'at the forefront of any specialization, only the experts are able to judge which investigations are worth pursuing further', with the corollary that experts sometimes disagree among themselves.
In the field of software engineering I've encountered several cases where new engineers are onboarded and promptly decide that the codebase is unmaintainable and should be rewritten from scratch. I usually don't give up on legacy code so easily, but there was one project where I genuinely held the opinion that rewriting it would have been more efficient than refactoring. It occurred to me, though, that when a software engineer says a particular piece of a codebase is crap, there is usually no good way for outsiders to tell whether that's true or not.
Incidentally, Feyerabend's Against Method originated out of a challenge by Lakatos to co-publish a book in which they would debate various ideas. That's a useful thing to keep in mind when reading Against Method. Later, someone did publish a book titled For and Against Method [1], in which writings of both Lakatos and Feyerabend are juxtaposed.
[1] https://press.uchicago.edu/ucp/books/book/chicago/F/bo362971...
The common metaphor given for Occam's razor is a field with some random dots plotted on it. Those points are "evidence", and drawing a shape around them is a "theory" or hypothesis. The simplest shape that encapsulates those dots is then said to be the preferred theory, compared to something like a rabbit or some other arbitrary shape.
But there is an inherent assumption there about what the plane looks like. It's entirely possible that the geometry on which those points of evidence lie is one where drawing a rabbit around all those points actually IS the "simplest" hypothesis.
There are known knowns, known unknowns, and unknown unknowns. But there's a fourth category, ideology: the unknown knowns.
In my view, it's this fourth category that ultimately dooms science. Science is ultimately cultural, and there's no way around that. Our institutional science is always analyzing outwardly, gathering more and more data; but just as important is analyzing inwardly: being self-critical about our invisible assumptions.
We can never fully absolve ourselves of unknown knowns, but I do believe in a "more perfect" mission: one in which we always accept we're imperfect but keep working towards a closer vision. To work towards this, we need to analyze not only the dots but also the geometries on which we place those dots.
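To make the dots-and-geometries point concrete, here's a minimal sketch (my own toy example, not from the thread): the same noisy points that no low-degree polynomial in Cartesian coordinates can fit become a one-parameter model the moment you assume polar coordinates. The data never changes; only the plane you assume does.

```python
# Toy illustration: the same points are "complex" in one assumed
# geometry and trivially "simple" in another.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2 * np.pi, 30)
radius = 1.0 + rng.normal(0.0, 0.02, 30)   # noisy points near a unit circle
x, y = radius * np.cos(theta), radius * np.sin(theta)

# Cartesian assumption: y is a polynomial function of x.
# A circle isn't a function of x at all, so even 10 coefficients fit poorly.
poly = np.polynomial.Polynomial.fit(x, y, deg=9)
cartesian_mse = np.mean((poly(x) - y) ** 2)

# Polar assumption: the radius is a constant. One parameter, near-perfect fit.
r = np.hypot(x, y)
polar_mse = np.mean((r - r.mean()) ** 2)

print(f"10-coefficient Cartesian fit MSE: {cartesian_mse:.4f}")
print(f" 1-parameter polar fit MSE:       {polar_mse:.6f}")
```

Which of the two counts as the "simplest" theory depends entirely on the geometry you brought with you before looking at the evidence.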
Procedure is created as the percentage of high-functioning workers decreases relative to the amount of work output necessary for the system to survive.
These procedures are for sure stifling. I recently read “From Atomos to Atom” and one of the things that stood out to me was the approach most philosophers who made substantial progress on the atom took: everything is false. To make progress, they started from a position of assuming everything humans knew about this domain was incorrect; they then systematically proved _to themselves_ why each step in human thinking was correct.
I’m starting to wonder if there are two distinct concepts that we’ve conflated in the term “Science.”
One science refers to humanity's collective constructs: the things we catalogue and teach and reconcile the world with en masse. This is a deeply social philosophy that is based on trust, not personal rigor. The scale of our collective constructs is too massive for any one person to tear apart and prove to themselves, so we substitute trust for rigor: we trust that someone else has been rigorous in generating these constructs.
The other science refers to practicing the generation of those constructs. This is a deeply personal philosophy in which there is no trust. It dictates our personal relationship and understanding of the world we inhabit. Here, science is something we personally practice in developing that understanding. We trust nothing, validate everything, and build up our own understanding in the domain.
With both, the defining trait is that reality is the final arbiter of truth.
To tie it back into business org and startup culture, not everyone in science is working towards paradigm shifts. The processes we have in place for indexing, compiling, and reconciling the constructs of science are likely sufficient. But they’re likely insufficient for generating a paradigm shift.
That being said, I find it unlikely that you’d have to tell someone working towards a paradigm shift that they should shun procedure. Many seem to share the trait of insatiable curiosity, where they’re going to build a construct against reality regardless of protocol.
I would put it slightly differently, and I honestly don't know if I'm being kinder or not. I'd say that in this book, Feyerabend is being a troll. He's out to get a reaction out of you more than to argue in good faith. In my view this is actually to the detriment of his point.
I'm still happy I read it, but I think it's one of those finicky, "meso-scale" ideas that are useful but don't apply at very small or very large scales. It's also interesting that it appeared a good decade after moral particularism, and it feels to me that his principle is "simply" methodological particularism.
It’s mostly a kind of applied epistemology. It also asks the question “how do we move past postmodernism?”. It accepts that tools like the scientific method have obvious limits and are not foolproof recipes for knowledge. This is what Feyerabend argues, and I pretty much agree with him.
However, it rejects the postmodern idea (post-Feyerabend) that this makes rationality useless or wrong: the idea that all truth is subjective or that truth is not a useful concept. Instead, he argues for embedding the tools of rationality into a larger framework that he calls "meta-rationality".
I think there is not really anything “new” in this analysis — he is in some ways just describing how applied rationality already works in practice. I have nonetheless personally found the ideas very clarifying.
You weren't kidding. I managed to find a part of "Analyses of Theories and Methods of Physics and Psychology (1970)" where I think his first iteration of Against Method was published as a paper (113 pages).
https://pixeldrain.com/u/mRL52iYs (50MB, PDF, with preview)
https://gofile.io/d/rDASZh (50MB, PDF, with preview)
https://1fichier.com/?2i8f0l8ns1gi3v6yf7ea (50MB, PDF, no preview)
Unstructured observations -> hypotheses -> structured observations (experiments) -> confirmed hypotheses.
I think that unstructured observation of new phenomena doesn't get enough credit in general, though some fields seem to be all phenomenology and little theorizing. But in most fields it's hard to write grants for unstructured observation of a phenomenon, and you have to pretend to be doing some specific experiment to get the experience necessary to put forward real hypotheses.
For example, Lakatos isn't satisfied with "anything goes" because it fails to consider the political and social consequences of being unable to distinguish science from pseudoscience. For Lakatos, demarcation is necessary to maintain a "standard of objective honesty" and to avoid falling into "intellectual decay".
Overall, Lakatos is much less provocative than Feyerabend, but is equally invested in picking apart the historical nuances of scientific progress brought into question by Popper and Kuhn.
So sometimes we can run randomized controlled trials to really understand the effectiveness of drugs, but we can't RCT climate change or the big bang, so we have to use simulations and models. That doesn't seem like "anything goes"; it's more like a response to that one guy in my econ class who always ranted that if it wasn't backed by an RCT, all theories were BS.
The converse may be true though. If it is not backed by a theory, an RCT is BS.
Polanyi _was_ a scientist, and his recognition of the influence of tacit knowledge ('an understanding that defies articulation') is equally essential. We all know things we cannot tell.
Useful for training critical thinking. Nothing should be considered sacred; it's OK to be wrong when exploring new ideas.
That we have no hard and fast rules for what is good science does not mean that anything goes. This is like saying that because it's impossible to write perfectly safe C++ programs, we should just use raw pointers. Imprecise methods that work with a certain probability still have value.
Leaps of logic like that are popular even more generally, and they leave me speechless.
"Anything goes" does not imply a prescription like that.
Second, let’s assume there is no method for arriving at truth. Well then how do we verify that some discovery is actually a discovery? Presumably if there was a way to do this, we could incorporate it into our method, thus undermining Feyerabend’s thesis. Well it turns out that his answer to this is to say that evaluation should consist entirely in the contribution of the discovery to happiness and flourishing. This could be useful at a general level, but seems useless when it comes to judging between theories and research paths within science.
Should programs be written in a procedural, object-oriented, or functional style? Just how much should you care about scalability when you're building your MVP? What's the best way to write tests for a program that makes API calls to servers you don't control? Rails or Next.js? Emacs or vi?
There's no tried-and-true method for determining the answer to any of the above. Most people will say "it depends." Depends on what? How do you tell if you made the right choice? Very often you can only tell afterward, when your company gets a bazillion users and your servers start crashing left and right. Even then, the answer is fairly subjective and boils down to whether your team, shareholders, and users are happy.