Thinking critically is, above all, about finding value. You need to think critically (and skeptically) to avoid assigning value to things that don't have it, but you must find value. The goal is to build knowledge - just as a study's authors must find knowledge in flawed data, you must find knowledge in flawed studies - and they are all flawed, of course.
Focusing on the flaws and trying to shoot down everything is just craven recreation.
That money and time are taken directly away from funding other studies - ones potentially more worthy, or more likely to be correct.
There is no point in looking at every (flawed) study in the most positive light, unless you have unlimited time and money to pursue every avenue of research.
Often (not always), the studies most heavily promoted in the news, in business, or in politics are not the best research, and other, less visible but more solid work gets ignored in favor of what's popular or well marketed.
This is very frustrating for people doing solid research, because every so often someone comes along with wild, exaggerated claims and very little data to back them up - and gets funding for it.
It takes literal years away from good science, just because someone markets well and speaks well.
Which is fine in business, but in science this is not something "the market" can or will correct for well, simply because the timespans are so long.
This line epitomizes the nonsense in this discussion. I didn't say every study; you can't know a study is flawed without seriously examining it; and I didn't say "in the most positive way" at all.
By using these exaggerations, you damage any serious discussion - you give people nothing to respond to except your emotional state.
What I said was that the point is to build knowledge, so the way to examine research is to find the valuable knowledge in it - which includes evaluating the accuracy, etc., of that knowledge. There's no other point to it - we're not awarding tenure here, so there's no point in keeping some overall score. We just want to learn what we can.
My guess is if you raise examples of "good science" the HN peanut gallery will jump in to point out the flaws in that science, too.
This isn’t critical thinking.
This is toxic positivity.
It’s okay to admit that some studies don’t have value to add. If you don’t accept this, you’re going to be tricked by a lot of people trying to get your attention with bad data.
Being able (and willing!) to filter out bad sources, even when they say something you want to hear, is a critically important skill. If you force yourself and others to find something positive about everything then you’re a dream come true to purveyors of low quality or even deliberate misinfo.
> some studies
It's almost every study on HN, not some studies - which you'd understand if you'd read my comment.
I've made this mistake time and time again, most recently with vitamin D association studies, and I'm grateful to all the people who urged everyone else to take a wait-and-see approach.
No, it's valuable work to find flaws, because it's much easier to fix and work on known flaws than to stumble in the dark.
Removing flaws and problems is one of the easiest ways to add value.
The real problem is that things like sample size - to pick a common example here - are easy to understand in a theoretical way, so people apply them to the actual (not theoretical) practice of real research, whose practicalities they don't understand. They also overemphasize them, because that's pretty much all they understand.
The first thing they look at in a paper is the sample size - and hey, now they sometimes have something to 'contribute'! But it just reinforces the same misunderstandings in others.
It sucks, a little, to have nothing to contribute, but it's a great opportunity to learn from people who do know.