Even worse: they bragged about it, then sent a new wave of buggy patches to see if the "test subjects" would fall for it once again, and then tried to shift the blame onto the kernel maintainers for being "intimidating to newbies".
This is thinly veiled and potentially dangerous bullying.
Which itself could be the basis of a follow-up research paper. The first one was about surreptitiously slipping vulnerabilities into the kernel code.
There's nothing surreptitious about their current behavior. They're now known bad actors attempting to get patches approved: first nonchalantly, and then, after being called out and rejected, by framing the rejection as bullying by the maintainers.
If the patches end up getting approved, everything about the situation is ripe for another paper: the initial rejection, the attempt to frame it as bullying by the maintainers (which, ironically, is thinly veiled bullying itself), and the impact of public pressure (which currently seems to be in the maintainers' favor, but the public is fickle and could turn on a dime).
Hell, even if the attempt isn't successful, you could probably turn it into another paper anyway. It wouldn't be as splashy, but it would still be an interesting meta-analysis of techniques bad actors can use to exploit the human element of the open-source process.
Seems more like low grade journalism to me.
Fortunately, the episode also suggests that the kernel development immune system is fully operational.
I'm not saying people or institutions can't change. But the burden of proof is on them now to show that they have. A good first step would be to acknowledge that there IS a good reason for doubt, and certainly not to whine about "preconceived bias".
It is not the job of the kernel maintainers to justify the team's new nonsense patches. If the team has stopped being bullshit, they should defend the merit of their own patches. They have failed to do so, instead deflecting with recriminations, and now they are banned.
As soon as I read that all sympathy for this clown was out the window. He knows exactly what he's doing.
I don't have data to back this up, but I've been around a while and I can tell you papers are rejected from conferences for ethics violations. My personal observation is that infosec/cybersecurity academia has been steadily moving to higher ethical standards in research. That doesn't mean that all academics follow this trend, but that unethical research is more likely to get your paper rejected from conferences.
Submitting bugs to an open source project is the sort of stunt hackers would have done in 1990 and then presented in a DEF CON talk.
IEEE seems to have no problem with this paper though.
> On the Feasibility of Stealthily Introducing Vulnerabilities in Open-Source Software via Hypocrite Commits. Qiushi Wu and Kangjie Lu. To appear in Proceedings of the 42nd IEEE Symposium on Security and Privacy (Oakland '21). Virtual conference, May 2021.
> We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch. At the same time, we point out the correct fixing of the bug and provide our correct patch. In all the three cases, maintainers explicitly acknowledged and confirmed to not move forward with the incorrect patches. All the UAF-introducing patches stayed only in the email exchanges, without even becoming a Git commit in Linux branches. Therefore, we ensured that none of our introduced UAF bugs was ever merged into any branch of the Linux kernel, and none of the Linux users would be affected.
It seems that the research in this paper has been done properly.
EDIT: since several comments come to the same point, I paste here an observation.
They answer these objections as well, in the same section:
> Honoring maintainer efforts. The OSS communities are understaffed, and maintainers are mainly volunteers. We respect OSS volunteers and honor their efforts. Unfortunately, this experiment will take certain time of maintainers in reviewing the patches. To minimize the efforts, (1) we make the minor patches as simple as possible (all of the three patches are less than 5 lines of code changes); (2) we find three real minor issues (i.e., missing an error message, a memory leak, and a refcount bug), and our patches will ultimately contribute to fixing them.
And, coming to ethics:
> The IRB of University of Minnesota reviewed the procedures of the experiment and determined that this is not human research. We obtained a formal IRB-exempt letter.
IEEE is just the publishing organisation and doesn't review research. That's handled by the program committee that each IEEE conference has. These committees consist of several dozen researchers from various institutions that review each paper submission. A typical paper is reviewed by 2-5 people and the idea is that these reviewers can catch ethical problems. As you may expect, there's wide variance in how well this works.
While problematic research still slips through the cracks, the field as a whole is getting more sensitive to ethical issues. Part of the problem is that we don't yet have well-defined processes and expectations for how to deal with these issues. People often expect IRBs to make a judgement call on ethics, but many (if not most) IRBs don't have computer scientists who are able to understand the nuances of a given research project and are therefore ill-equipped to reason about the implications.
I can think of a whole lot of three letter agencies with reasons to do that, most of whom recruit directly from universities.
Could a number of seemingly unrelated individuals introduce a number of bugs over time that combine to form an exploit, without being detected?
Doing code review at work I am constantly catching blatantly obvious security bugs. Most developers are so happy to get the thing to work, that they don't even consider security. This is in high level languages, with a fairly small team, only internal users, and pretty simple code base. I can't imagine trying to do it for something as high stakes and complicated as the kernel. Not to mention how subtle bugs can be in C. I suspect it is impossible to distinguish incompetence from malice. So aggressively weeding out incompetence, and then forming layers of trust is the only real defense.
"Can open-source maintainers make a mistake by accepting faulty commits?"
In addition to being scummy, this research seems utterly pointless to me. Of course mistakes can happen, we are all humans, even the Linux maintainers.
Both are unethical, disruptive, and prove nothing about the integrity of the organizations they target.
Submitting a fake paper to a journal read by a few dozen academics is a threat to someone's ego. It is not in the same ballpark as a threat to IT infrastructure everywhere.
https://duckduckgo.com/?q=facebook+emotional+study&t=fpas&ia...
Any pentester or red team considers their profession an ethical one.
Judging by the response of the Linux Foundation, this was clearly not authorized, nor did it fall under any bug bounty rules/framework they might offer. Social engineering attacks are often out of bounds for bug bounties, and even authorized engagements need to follow strict rules and procedures.
I wonder if there are even legal steps that could be taken by the Linux Foundation.