Besides, are you arguing that ends justify the means if the intent behind the research is valid?
It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused. There's no point in doing this; you can just search Wikipedia's edit history for corrections, and start your analysis from there.
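A minimal sketch of that passive approach, assuming the revision history has already been fetched (e.g. via the MediaWiki API) and using the simplifying assumption that an edit whose comment mentions "revert" undoes the immediately preceding revision (the sample data below is invented):

```python
from datetime import datetime, timedelta

def time_to_revert(revisions):
    """Given revisions as (timestamp, comment) tuples in chronological
    order, return the delay between each revert and the edit right
    before it. Heuristic: a comment containing 'revert' undoes the
    immediately preceding revision."""
    delays = []
    for prev, cur in zip(revisions, revisions[1:]):
        if "revert" in cur[1].lower():
            delays.append(cur[0] - prev[0])
    return delays

# Hypothetical revision history (timestamps and comments are made up):
sample = [
    (datetime(2021, 4, 1, 12, 0), "expanded history section"),
    (datetime(2021, 4, 1, 12, 5), "lol edited this page"),
    (datetime(2021, 4, 1, 12, 9), "Reverted edits by 192.0.2.1 (vandalism)"),
]
print(time_to_revert(sample))  # [datetime.timedelta(seconds=240)]
```

The weakness, of course, is exactly the confounder argument below: you only see the vandalism that happened to occur, not a controlled sample of it.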
It's a specific threat model they were exploring: a malicious actor introducing a vulnerability on purpose.
> Couldn't they just look at the history of security vulnerabilities in the kernel, and analyze how long it took for them to be detected?
Perhaps they could. I guess it'd involve much more work, and could've yielded zero results - after all, I don't think there are any documented examples where a vulnerability was proven to have been introduced on purpose.
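As a sketch of what that retrospective analysis could look like: kernel fix commits conventionally carry a "Fixes:" trailer naming the commit that introduced the bug, so one could pair those up and measure the lifetime. Assuming the trailers have already been extracted from `git log` (the commit data below is invented):

```python
from datetime import datetime

# Hypothetical commit records: sha -> (author_date, fixed_sha_or_None).
# In a real analysis these would come from `git log --format=...` plus
# parsing the "Fixes: <sha> (...)" trailer used in kernel fix commits.
commits = {
    "aaa111": (datetime(2018, 3, 10), None),     # introduces the bug
    "bbb222": (datetime(2020, 7, 1), "aaa111"),  # fixes it years later
}

def detection_latency(commits):
    """Days between a bug-introducing commit and the fix that names it."""
    out = {}
    for sha, (date, fixes) in commits.items():
        if fixes and fixes in commits:
            out[sha] = (date - commits[fixes][0]).days
    return out

print(detection_latency(commits))  # {'bbb222': 844}
```

Even this only measures accidental bugs, which is the point being made: it says little about how long a deliberately disguised one would survive.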
> what's the point of all this subterfuge in the first place?
Control over the experimental setup, which is important for the validity of the research. Notice how most research involves gathering fresh subjects and controls - scientists don't chase around the world looking for people or objects that, by chance, already did the things they're testing for. They want fresh subjects to better account for possible confounders, and hopefully make the experiment reproducible.
(Similarly, when chasing software bugs, you could analyze old crash dumps all day to try and identify a bug - and you may start with that - but you always want to eventually reproduce the bug yourself. Ultimately, "I can reproduce it, and did" is always better than "looking at past data, I guess it could happen".)
> It seems equivalent to vandalising Wikipedia to see how long it takes for someone to repair the damage you caused.
Honestly, I wouldn't object to that experiment either. It wouldn't do much harm (a little additional vandalism doesn't matter on the margin, the base rate is already absurd), and could yield some social good. Part of the reason to have public research institutions is to allow researchers to do things that would be considered bad if done by a random individual.
Also note that both Wikipedia and the Linux kernel are essentially infrastructure now. Running research like this against them makes sense, whereas running the same research against a random small site / OSS project wouldn't.
Isn't this part still experimenting on people without their consent? Why does one group of maintainers get to decide that you can experiment on another group?
Does creating a vaccine justify the death of some lab animals? Probably.
Does creating supermen justify mutilating people physically and psychologically without their consent? Hell no.
You can’t just ignore the context.
That carries the risk that the contacted maintainer is later accused of collaborating with the saboteurs, or that they consult others. The former is very awful; the latter possibly invalidates the results.
> 2) Create a group of maintainers who know the experiment is going to happen, but leave a certain portion of the org out of it
Assuming the leadership agrees and won't break confidentiality, which they might if the results could make them look bad. The results would be untrustworthy, or could even increase complacency.
> 4) Interfere before any further damage is done
That was done, was it not?
> Besides, are you arguing that ends justify the means if the intent behind the research is valid?
Linux users are lucky they got off this easy.
The allegation being made on the mailing list is that some incorrect patches of theirs made it into git and even the stable trees. Since there is currently no enumeration of those patches, or of which ones are alleged to be incorrect, I cannot say whether this is true.
But that's the claim.
edit: And looking at [1], they have a bunch of relatively tiny patches to a lot of subsystems, so depending on how narrowly gregkh means "rip it all out", this may be a big diff.
edit 2: On rereading [2], I may have been incorrectly conflating the assertion about "patches containing deliberate bugs" with "patches that have been committed". Though if they're ripping everything out anyway, it appears they aren't drawing a distinction either...
[1] - https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux...
[2] - https://lore.kernel.org/linux-nfs/YH%2F8jcoC1ffuksrf@kroah.c...
In this case, in my opinion, a small set of maintainers and Linus as "management" would have to be in the know to, e.g., stop a merge of such a patch once it was accepted by someone in the dark.
Kernel maintainers are volunteering their time and effort to make Linux better, not to be entertaining test subjects for the researchers.
Even if there is no ethical violation, they are justified in being annoyed at having their time wasted, and in taking measures to discourage and prevent such malicious behaviour in the future.
Given the importance of the Linux kernel, there has to be a way to make contributions safer. Some people even compare it to the "water supply" and others bring in "national security".
> they are justified to be annoyed at having their time wasted, and taking measures to discourage and prevent such malicious behaviour in the future.
"Oh no, think of the effort we have to spend at defending a critical piece of software!"