UMN looks pretty shoddy here - the researcher's claim that these patches were automated by a tool looks like a potential lie.
Each institution, and each IRB, is made up of people and a set of policies. One does not have to meaningfully misrepresent things to IRBs for them to be misunderstood. Further, "exempt from IRB review" and "not human subjects research" are not actually the same thing. I've run into this problem personally - the IRB declines to review the research plan because it does not meet their definition of human subjects research, yet the journal will not accept the article without IRB review. Catch-22.
Further, research that involves deception is considered a perfectly valid form of research in certain fields (e.g., psychology). The IRB may not have responded simply because they see the complaint as invalid. Their mandate is protecting human beings from harm, not fielding complaints from random individuals who email them out of annoyance. Protecting the Linux kernel from harm is no more part of their framework than protecting a jet engine from harm (sorry if that sounds callous). Someone not liking a study is not research misconduct, and if the IRB determined within their processes that it isn't even human subjects research, there isn't a lot they can do here.
I suspect that this is just one of those disconnects that happens when people talk across disciplines. No misrepresentation was needed; all that was needed was for someone reviewing this, whose background is medicine and not CS, to not understand the organizational and human processes behind submitting a software 'patch'.
The follow-up behavior... not great... but the start of this could be a series of individually rational actions that combined into something problematic because they were never holistically evaluated in context.
However, given that, my interpretation of the federal Common Rule is that this work would indeed fit the definition of human subjects research, as it comprises an intervention, and it is about generalizable human processes, not the software artifact.
The type of deception that is allowable in such cases is lying to participants about what it is that is being studied, such as telling people that they are taking a knowledge quiz when you are actually testing their reaction time.
Allowable deception does not include invading the space of people who did not consent to be studied under false pretenses.
It sounds pretty callous if that jet engine gets mounted on a plane that carries humans. In this hypothetical the IRB absolutely should have a hand in stopping research that has a methodology that includes sabotaging a jet engine that could be installed on a passenger airplane.
Waving it off as an inanimate object doesn't capture the complete problem, given that many safety-critical systems can depend on that inanimate object.
https://en.wikipedia.org/wiki/Peter_Boghossian#Research_misc...
Do you think that the IRB was correct to make the determination they did? It does sound like a bit of a grey area
They also lied in the paper about their methodology - claiming that once their code was accepted, they told the maintainers it should not be included. In reality, several of their bad commits made it into the stable branch.
So it’s possible that this situation has nothing to do with that research, and is just another unethical thing that coincidentally comes from the same university. Or it really is a new study by the same people.
Either way, I think we should get the facts straight before the wrong people are attacked.
Is it known whether these commits were indeed bad? It is certainly worth removing them just in case, but is there any confirmation?
And also, given that the stakes are higher e.g. in medicine, and the bar is lower in biology, one often gets a pass: "You don't want to poke anyone with needles, no LSD and no cages? Why are you asking us then?" Or something to that effect. The IRBs are just not used to such "harmless" things not being justified by the research objective.
To be clear, this is unethical research.
But I read the paper, and these patches were probably automatically generated by a tool (or perhaps guided by a tool and filled in concretely by a human): the analysis boils down to a very simple LLVM pass that checks for pointer dereferences and inserts calls to functions identified as performing frees/deallocations before those dereferences. Page 9 onwards of the paper[1] explains it in reasonable detail.
[1]: https://github.com/QiushiWu/QiushiWu.github.io/blob/main/pap...
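To make the mechanism concrete, here is a minimal sketch in Python of the idea described above - it is emphatically not the authors' actual LLVM pass, and the op names, the toy straight-line IR, and the choice of `kfree` are all illustrative assumptions:

```python
# Toy model of the analysis: walk a straight-line IR and, before each
# pointer dereference, insert a call to a function known to free its
# argument - turning a legitimate use into a use-after-free.
# (The real tool operates on LLVM IR of kernel code; this IR is invented.)

def insert_uaf(instructions, free_fn="kfree"):
    """instructions: list of (op, ptr) tuples in a toy IR.
    Returns a copy with a freeing call inserted before each 'deref'."""
    out = []
    for op, ptr in instructions:
        if op == "deref":
            # Free the pointer immediately before its use.
            out.append(("call", free_fn, ptr))
        out.append((op, ptr))
    return out

prog = [("alloc", "p"), ("store", "p"), ("deref", "p")]
patched = insert_uaf(prog)
# patched now has ("call", "kfree", "p") right before ("deref", "p")
```

The point is how little analysis is needed: finding a dereference and naming a deallocator is enough to manufacture the bug, which is consistent with the patches being largely tool-generated.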
Could they have submitted patches to fix the problems based on the same tooling, or was that not possible? (I am not close to the kernel development flow.)
Depends on what you mean: they knew exactly what they were patching, so they could easily have submitted inverse patches. On the other hand, the converse research problem (patching existing UAFs rather than inserting new ones) is currently unsolved in the general case.