> A lot of these have already reached the stable trees.
If the researchers were trying to prove that it is possible to get malicious patches into the kernel, it seems like they succeeded -- at least for an (insignificant?) period of time.
While it may be "scientifically interesting", intentionally introducing bugs into Linux that could potentially make it into production systems while work on this paper was ongoing could IMO be described as utterly reckless at best.
Two messages down in the same thread, it more or less culminates with the university e-mail suffix being banned from several kernel mailing lists and associated patches being removed[1], which might be an appropriate response to discourage others from similar stunts "for science".
[1] https://lore.kernel.org/linux-nfs/YH%2FfM%2FTsbmcZzwnX@kroah...
> Ensuring the safety of the experiment. In the experiment, we aim to demonstrate the practicality of stealthily introducing vulnerabilities through hypocrite commits. Our goal is not to introduce vulnerabilities to harm OSS. Therefore, we safely conduct the experiment to make sure that the introduced UAF bugs will not be merged into the actual Linux code. In addition to the minor patches that introduce UAF conditions, we also prepare the correct patches for fixing the minor issues. We send the minor patches to the Linux community through email to seek their feedback. Fortunately, there is a time window between the confirmation of a patch and the merging of the patch. Once a maintainer confirmed our patches, e.g., an email reply indicating “looks good”, we immediately notify the maintainers of the introduced UAF and request them to not go ahead to apply the patch.
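For context (and purely as an illustration, not one of the actual submitted patches), a use-after-free (UAF) is exactly what it sounds like: memory is released, but a stale pointer to it is dereferenced afterwards. A minimal userland C sketch:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        char *buf = malloc(16);

        if (!buf)
            return 1;
        snprintf(buf, 16, "hello");
        free(buf);             /* the object is gone ...          */
        printf("%s\n", buf);   /* ... but we still read it: a UAF */
        return 0;
    }

In the kernel the same pattern shows up with kmalloc()/kfree(), and it can be exploitable because the freed memory may be reallocated with attacker-controlled contents before the stale access happens.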
Are you saying that despite this, these malicious commits made it to production?
Taking the authors at their word, it seems like the biggest ethical consideration here is that of potentially wasting the time of commit reviewers—which isn't nothing by any stretch, but is a far cry from introducing bugs in production.
Are the authors lying?
Vulnerable commits reached stable trees as per the maintainers in the above email exchange, though the vulnerabilities may not have been released to users yet.
The researchers themselves acknowledge in the above email exchange that the patches were accepted, so it's hard to believe that they were being honest, that they were fully aware of the ethics violations and vulnerabilities they introduced, or that they would have prevented the patches from being released without gregkh's intervention.
The final commit in the reverted list (d656fe49e33df48ee6bc19e871f5862f49895c9e) is originally from 2018-04-30.
EDIT: Not all of the 190 reverted commits are obviously malicious:
https://lore.kernel.org/lkml/20210421092919.2576ce8d@gandalf...
https://lore.kernel.org/lkml/20210421135533.GV8706@quack2.su...
https://lore.kernel.org/lkml/CAMpxmJXn9E7PfRKok7ZyTx0Y+P_q3b...
https://lore.kernel.org/lkml/78ac6ee8-8e7c-bd4c-a3a7-5a90c7c...
What a mess these guys have caused.
The submitter has to remember to send the "warning, don't apply this patch" mail in the short time window between confirmation and merging. What happens if one of the students doing this work gets sick and misses some days of work, withdraws from the program, or just completely forgets to send the mail?
What if the reviewer doesn't see the mail in time or it goes to spam?
In short, yes. Every attempted defense of them has operated by taking their statements at face value. Every position against them has operated by showing the actual facts.
This may be shocking, but there are some people in this world who rely on other people naively believing their version of events, no matter how much it contradicts the rest of reality.
I think they are saying that it's possible that some code was branched and used elsewhere, or simply compiled into a running system by a user or developer.
I agree. I would say this is kind of a "human process" analog of your typical computer security research, and that this behavior is akin to black hats exploiting a vulnerability. Totally not OK as research, and totally reckless!
I share the researchers' intellectual curiosity about whether this would work, but I don't see how a properly-informed ethics board could ever have passed it.
It's awfully scary to think about how vulnerabilities might be purposely introduced into this important code base (as well as many others) only to be exploited at a later date for an intended purpose.
Edit: NM, see st_goliath response below
Haven't whitehat hackers doing unsolicited pen-testing been prosecuted in the past?
The whole idea of the mailing list based submission process is that it allows others on the list to review your patch sets and point out obvious problems with your changes and discuss them, before the maintainer picks the patches up from the list (if they don't see any problem either).
As I pointed out elsewhere, there are already test farms and static analysis tools in place. On some MLs you might occasionally see auto generated mails that your patch set does not compile under configuration such-and-such, or that the static analysis bot found an issue. This is already a thing.
What happened here is basically a con in the patch review process. IRL con men can scam their marks because most people assume, when they leave the house, that the majority of the people outside aren't out to get them. Except sometimes they run into someone for whom that assumption doesn't hold, and end up parted from their money.
For the paper, bypassing the review step worked for some of the many patches they submitted because a) humans aren't perfect, and b) reviewers work from the mindset that, most of the time, people submitting bug fixes do so in good faith.
Do you maintain a software project? On GitHub perhaps? What do you do if somebody opens a pull request and says "I tried such and such and then found that the program crashes here, this pull request fixes that"? When reviewing the changes, do you immediately, by default, jump to the assumption that they are evil, lying and trying to sneak a subtle bug into your code?
Yes, I know that this review process isn't perfect, that there are problems and I'm not trying to dismiss any concerns.
But what technical measure would you propose that can effectively stop con men?
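To make the "con" concrete, here is a hypothetical, deliberately simplified userland sketch (the struct and function names are made up, and this is not one of the actual patches) of the kind of change that can sail through review: adding a free() on a rare error path to "plug a leak", which quietly creates a use-after-free and a double free in the caller. It compiles cleanly and looks like a good-faith cleanup:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct conn {
        char name[32];
        int fd;
    };

    /* The "fix": freeing conn on failure looks like it plugs a leak ... */
    static int conn_register(struct conn *c)
    {
        if (c->fd < 0) {
            free(c);   /* ... but the caller still owns this pointer */
            return -1;
        }
        return 0;
    }

    int main(void)
    {
        struct conn *c = malloc(sizeof(*c));

        if (!c)
            return 1;
        strcpy(c->name, "eth0");
        c->fd = -1;                       /* the rare error path */
        if (conn_register(c) != 0)
            fprintf(stderr, "register failed for %s\n", c->name); /* UAF */
        free(c);                          /* and a double free */
        return 0;
    }

Static analyzers do flag some of these, but ownership of a pointer across a call boundary is exactly the kind of semantic property that tooling only catches some of the time, which is why the review process leans so heavily on good faith.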
People want to claim these are lone rogue researchers and that good people at the university shouldn't be punished, but this is the only way you can rein in these types of rogue individuals: by putting the collective reputation of the whole university on the line so that it polices its own people. Every action of individual researchers must be assumed to put the reputation of the university as a whole on the line. This is the cost of letting individuals operate within the sphere of the university.
Harsh, "overreaction" punishment is the only solution.
So yes.
From FOSDEM 2014: "NSA Operation ORCHESTRA: Annual Status Report". It’s pretty entertaining and illustrates that this is nothing new.
https://archive.fosdem.org/2014/schedule/event/nsa_operation... https://www.youtube.com/watch?v=3jQoAYRKqhg
Very good point.
In a roundabout way, this researcher has achieved their goal, and I hope they publish their results. Certainly more meaningful than most of the drivel in the academic paper mill.
In effect, this will lead to even valuable contributions from universities being viewed with more suspicion, which will be very damaging in the long run.
What review process catches 100% of garbage? It's a mechanism to catch 99% of garbage -- otherwise the Linux kernel would have no bugs.
Runs counter to how open source is ideally written, but for such a core project, perhaps stronger checks are needed.
I've heard it many times that they're thoroughly reviewed and back doors are very unlikely. So yes, some people were under the impression.
Does that add anything new to what we know since the creation of the "obfuscated C contest" in 1984?
It shows nothing of the sort. No review process is 100% foolproof, and open source means that everything can be audited if it is important to you.
The other option is to closed-source everything, and I can guarantee that those review processes let stuff through, even if it's only "to meet deadlines", and you will likely not be able to audit it.
Did these "researchers" in any way demonstrate that they were going to come clean about what they had done before their "research" made it anywhere close to release/GA?
Try sneaking into the White House and, when you get caught, tell them "I was just testing your security procedures".
"if they are attempting to test us by first submitting malicious patches as an experiment, we can't accept what we have accepted as not being malicious and so it's safer to remove them than to keep them".
my 2c.
Obviously, trust should not be the only thing that maintainers rely on, but this is a social endeavour and trust always matters in such endeavours. Doing business with people you can't trust makes no sense. Without trust, I agree fully that it is not worth the maintainer's time to accept anything from such people, or from that university.
And the fact that one can do damage with malicious code is nothing new at all. It is well known, and nothing new, that bad code can ultimately kill people. It is also more than obvious that I could ring my neighbor's doorbell, ask him or her for a cup of sugar, and bring a hammer down on their head. Or people can go to a school and shoot children. Does anyone in their right mind have to do such damage in order to prove something? No. Does it prove anything? No. Does the fact that some people do things like that "prove" that society is wrong and that trust and collaboration are wrong? What idiocy, of course not!
Before this study came out, I'm pretty sure there were already known examples of this happening, and it would have been reasonable to assume that some such vulnerabilities existed. But now we have even more reason to worry, given that they succeeded in doing this multiple times as a two-person team without real institutional backing. Imagine what a state-level actor could do.