How to solve this "issue" without putting too much process around it? That's the challenge.
Sarcasm aside, pentesting/redteaming is only ethical if the target consents to it! Please don't try to prove your point the way these researchers have.
If the researcher had sent these patches under a different identity, that is exactly how malicious contributions appear. The maintainers won't assume malice, will waste a bunch of time communicating with the bad actor, and may NOT revert their previous, potentially harmful contributions.
I too thought like this until yesterday. Then someone made me realize that's not how getting consent works in these situations. You get consent from higher up the chain, not from the people doing the work. So Greg Kroah-Hartman could have been consulted, as he would not be personally reviewing this stuff. This would also give you a chance to understand how the release process works. You would also have an advantage over the bad actors, because they have to study the process from the outside.
Yes, and if you do it without a heads-up as well that makes you a bad actor. This university is a disgrace and that's what the problem is and should remain.
To take a more realistic example, we could quickly learn a lot more than we know today about language acquisition if we separated a few children from all human contact to study how they learn from controlled stimuli. Still, we don't do that research; we look for much more complicated and lossy, but more humane, methods to study the same questions.
And as for the solutions, their contribution is nil: no suggestions that haven't already been suggested, tried, or rejected a thousand times over.
I also consider Greg’s response just as much a test of UMN’s internal processes as the researcher’s attempt at testing kernel development processes. Hopefully there will be lessons learned on both sides and this benign incident makes the world better. Nobody was hurt here.
The purpose of the research was probably to show how easy it is to manipulate the Linux kernel in bad faith. And they did it. What are they gonna do about it besides banning the university?
And also, if I had to pick between a somewhat inclusive mode of work where some rando can get code included at the slightly increased risk of including malicious code, and a tightly knit cabal of developers mistrusting all outsiders per default: I would pick the more open community.
If you want more paranoia, go with OpenBSD. But even there some rando can get code submitted at times.
I mean, it is no surprise. It is even worse with proprietary software, because you are much less likely to be wary of your own colleague or employer.
Hell, given that the actual impact is overblown in the paper, I think the percentage caught is really quite good, to be honest, assuming good faith from the contributor.
What? Are you actually trying to argue that "researchers" proved that code reviews don't have a 100% success rate in picking up bugs and errors?
Especially when code is pushed in bad faith?
I mean, think about that for a minute. There are official competitive events for sneaking in malicious code that are already decades old and going strong[1]. Sneaking vulnerabilities through code review is a competitive sport. Are we supposed to feign surprise now?
Bug bounties are more than a different beast: they are a strawman.
Sneaking vulnerabilities through code review is literally a competitive sport, and it has zero to do with bug bounties.