"Study failed to find a reduction" _does not equal_ "There wasn't a reduction".
From the abstract: "The risk of death from colorectal cancer was 0.28% in the invited group and 0.31% in the usual-care group (risk ratio, 0.90; 95% CI, 0.64 to 1.16)"
Look at the confidence interval. This study was consistent with anything from a 36% reduction in deaths to a 16% increase in deaths. It's just a really wide range. All we can say is that the study didn't gather enough data to pin down the size of the effect, not that there wasn't an effect.
This is likely because there weren't that many people who died of colon cancer in any of the groups. This study just didn't track enough people to provide an answer.
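To make the width of that interval concrete, here's a back-of-the-envelope reconstruction using the standard log-risk-ratio formula. The group sizes and death counts below are illustrative round numbers chosen only to roughly match the quoted rates (0.28% vs 0.31%), not the trial's actual counts:

```python
import math

# Illustrative counts picked to approximate the quoted rates;
# NOT the trial's actual numbers.
deaths_invited, n_invited = 78, 28_000   # ~0.28%
deaths_usual, n_usual = 174, 56_000      # ~0.31%

rr = (deaths_invited / n_invited) / (deaths_usual / n_usual)

# Standard error of log(RR) for a 2x2 table
se = math.sqrt(1 / deaths_invited - 1 / n_invited
               + 1 / deaths_usual - 1 / n_usual)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

Even with tens of thousands of participants per arm, only a couple hundred total deaths drive the estimate, which is why the interval spans everything from a large benefit to a modest harm.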
In a randomized controlled trial you either find a significant difference in your metrics, or you don't. There's no other option. And in every case where you don't find a significant difference, the confidence interval is too wide to rule out whatever difference seems to exist. Your argument here is a fallacy ("you just didn't use a big enough sample!") which is a variant of my personal favorite: "it would have worked if you'd done X, Y or Z!"
There's always another X, Y, or Z. The negative study is always too small for the people who believe in the thing it's testing. As a supporter of some intervention, the onus is therefore on you to prove your claim in a demonstrated scenario, not on everyone else to disprove it in all scenarios. Could it be true that colonoscopies have some significant mortality benefit smaller than what an 80,000-person RCT can detect? Sure. But that doesn't make the headline wrong.
This study didn't find a mortality benefit. Arguing that there's some theoretical other study that might find a benefit isn't relevant.
This is a poor way of thinking about statistics. Whether or not you reject a sharp null hypothesis doesn't give you much information (see for example: https://www.gwern.net/Everything). Failing to reject, in particular, is compatible with a wide range of effects.
>In all cases where you don't find a significant difference, the problem is that the confidence interval is too wide for whatever difference seems to exist.
With enough data, there could totally have been a tight range around no effect or a small effect. This is not what we got here though.
Also note that other variables such as cancer risk came out significant, so while this study doesn't provide much inductive evidence around cancer death, we do get some deductive evidence based on the known link between cancer and death. Not to mention that cancer and cancer treatments are not fun even when they don't kill you.
What the trial showed was a small effect with a wide uncertainty on a big sample. We cannot distinguish this from zero.
Again, could the observed effect be significant with a larger trial? Sure. But that's always true for a negative result. The objection carries no information.
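To put a number on "a larger trial": a standard two-proportion sample-size approximation (z ≈ 1.96 for two-sided α = 0.05, z ≈ 0.84 for 80% power; the baseline risk and effect size are assumptions for illustration) shows how big a trial would need to be to reliably detect a 10% relative reduction in a ~0.3% outcome:

```python
p_control = 0.0031           # assumed baseline risk of colorectal-cancer death
p_treat = 0.9 * p_control    # assumed true 10% relative reduction

z_alpha, z_beta = 1.96, 0.84  # alpha = 0.05 (two-sided), power = 80%
var_sum = p_control * (1 - p_control) + p_treat * (1 - p_treat)
n_per_arm = (z_alpha + z_beta) ** 2 * var_sum / (p_control - p_treat) ** 2

print(f"~{n_per_arm:,.0f} per arm, ~{2 * n_per_arm:,.0f} total")
```

Under these assumptions it comes out to roughly half a million people per arm, an order of magnitude bigger than this trial, just to have 80% power against a 10% relative reduction. That is the sense in which "just run a bigger study" is always available as an objection.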
"Intention to treat" here means that you count everyone in the group that got an invitation to get a colonoscopy, regardless of whether or not they actually did it. Though this sounds counterintuitive, it's the "gold standard" because, if you don't do this, you leave yourself open to bias -- maybe the people who seek out colonoscopy have some symptom, family history or other reason that leads them to seek out treatment. Maybe the people who get a test get more treatment, and that treatment is harmful in the marginal case. Or just as importantly: maybe the people who don't have the time/inclination to do one would be better served by an alternative test.
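A toy simulation illustrates the selection bias that intention-to-treat protects against. Every number here is made up (risk prevalence, mortality rates, compliance rates); the point is only that when a hidden risk factor drives both mortality and the decision to get screened, a naive "compare the people who actually got the exam" analysis is biased even when screening has zero true effect:

```python
import random

random.seed(0)
N = 200_000

ctrl_deaths = itt_deaths = 0
pp_deaths = pp_n = 0
for _ in range(N):
    risky = random.random() < 0.2          # hidden risk factor (e.g. family history)
    p_death = 0.010 if risky else 0.002    # made-up baseline mortality
    # usual-care arm person
    ctrl_deaths += random.random() < p_death
    # invited-arm person; risky people are more likely to actually get the exam
    comply = random.random() < (0.8 if risky else 0.3)
    died = random.random() < p_death       # screening has ZERO true effect here
    itt_deaths += died
    if comply:
        pp_n += 1
        pp_deaths += died

ctrl_rate = ctrl_deaths / N
itt_rate = itt_deaths / N
pp_rate = pp_deaths / pp_n
print(f"usual care {ctrl_rate:.4f}  ITT {itt_rate:.4f}  per-protocol {pp_rate:.4f}")
```

In this setup the per-protocol comparison makes screening look harmful purely because higher-risk people self-select into it; flip the compliance probabilities and it would look spuriously protective instead. ITT dodges this by comparing the groups exactly as randomized.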
Everyone (including GP) is fixating on the magnitude of the primary outcome and squabbling about whether or not colonoscopies help people. But I think the more interesting aspect of this study is that it shows that the genetic tests probably aren't inferior to the invasive, painful, time-consuming rectal exam. If that's true, it's great news!
Isn’t there just as much chance for bias if the treatment is voluntary? Maybe the people who are more likely to have health issues are less likely to treat them.
I think there is a valid question about how effective a colonoscopy is given that you get one, and a separate valid question about how effective telling people to get colonoscopies is. According to the article, this paper answers the second question strongly via the “gold standard”, and the first question less strongly via a secondary analysis.
Part of the reason it’s counterintuitive here is the title of the article is “effect of colonoscopy screening”, not “effect of a doctor’s invitation to have a colonoscopy screening”. The title more than suggests that we’re comparing the outcomes of actually having the screening to not having one.
> maybe the people who don’t have the time/inclination to do one would be better served by an alternative test.
I see what you’re saying - the overall effectiveness of our current system may be low because the rate of voluntary adoption of colonoscopy is low, and even a lower-accuracy screen could be more effective if more people opt in.
One problem with drawing a conclusion this way is that it ignores the possibility of dramatic changes in either public opinion or in the colonoscopy procedure itself. What if we had the tech to do the colonoscopy at home in private? Would that change the voluntary rate of testing dramatically?
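The trade-off in the parent comment is simple arithmetic: the population-level benefit of a screening program is roughly (uptake) × (reduction among those screened). With purely hypothetical numbers:

```python
# Hypothetical numbers for illustration only.
# Population-level mortality reduction ~ uptake * reduction among the screened.
colonoscopy_effect = 0.40 * 0.50  # 40% uptake, 50% reduction if screened
home_test_effect = 0.80 * 0.30    # 80% uptake, 30% reduction if screened

print(f"{colonoscopy_effect:.0%} vs {home_test_effect:.0%}")
```

Under these assumptions the "worse" test wins at the population level (20% vs 24%). And as noted above, uptake itself isn't fixed: an at-home option could move both numbers at once.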