I've also been on the other side of this, receiving some spammy LLM-generated irrelevant "security vulnerabilities", so I also get the desire for some filtering. I hope projects don't adopt blanket hard-line "no AI" policies, which will only be selectively enforced against new contributors where the code "smells like" LLM code, but that's what I'm afraid will happen.
Well, we don't receive that many low-quality PRs in general (I opened this issue to discuss solutions before it becomes a real problem). Speaking personally, when it does happen I try to help mentor the person to improve their code or (in the case where the person isn't responsive) I sit down and make the improvements I would've made and explain why they were made as a comment in the PR.
When it comes to LLM-generated code, I now end up going back and forth with someone who will probably just copy-paste my comments into an LLM (likely without even reading them). It just feels disrespectful.
> I hope projects don't adopt blanket hard-line "no AI" policies, which will only be selectively enforced against new contributors where the code "smells like" LLM code, but that's what I'm afraid will happen.
Well, this is a two-way street -- all of the LLM-generated PRs and issues I've seen so far do not say that they are LLM-generated, in a way that I am tempted to describe as "dishonest". If every LLM-generated PR were tagged as such, I might have a different outlook on the situation (and might instead be willing to review these issues, just with lower priority).
The "hard-line policy" would then shift from being "used LLM tools" to "lied on the LLM usage disclosure", and it feels a lot less like selective enforcement (from my perspective). Obviously it won't stop these spammy issues/PRs, but neither will a hard-line policy against all AI.
First, my original comment was going to ask if you've looked at what any other reputable repos are doing. Specifically, popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive policy, or positions that you want to include?
Second, if I were forced to take a stand on AI, I would duplicate the policy from Zig. I feel their policy hits the exact ethos FOSS should strive for. They even ban AI for translations, because the reader is just as capable a participant. Importantly, asking the author to do their best (without AI), and trusting the reader to also try their best, encourages human communication. It also gives the reader control over, and knowledge of, the exact amount of uncertainty introduced by the LLM, which is critically important to understanding a poor-quality bug report from a helpful user who is honestly trying to help. Lobste.rs's GitHub disallows AI contributions for an entirely different reason that I haven't seen covered in your GH thread yet.
Finally, you posted the issue as an RFC, but then explicitly excluded HN from commenting on it. I think that was a fantastic decision, and expertly written. (I also appreciate that lesson in tactfulness :) ) That said, if you're actually interested in requesting comments or thoughts you wouldn't have considered, I would encourage you to make a top-level RFC comment in this thread. There will likely be a lot of human slop to wade through, but occasionally I'll uncover a genuinely great comment on HN that improves my understanding. The smart pro-AI crowd might have an argument I want to consider but would be unlikely to reach on my own, because of my bias about the quality of AI. Such a comment would be likely to appear on HN, but the smart people I'd want to learn from would never comment on the GH thread now, and I appreciate it when smart people I disagree with contribute to my understanding.
PS Thanks for working on opencontainers, and caring enough to keep trying to make it better, and healthier! I like having good quality software to work with :)
Well, I posted this as an RFC for other runc maintainers and contributors; I didn't expect it to get posted to Hacker News. I don't particularly mind hearing outsiders' opinions, but it's very easy for things to get sidetracked / spammy if people with no stake in the game start leaving comments. My goal with the comment about "don't be spammy" was exactly that -- you're free to leave a comment, just think about whether it's adding to the conversation or just looks like spam.
> Specifically, popular FOSS projects that are not backed by a company looking to sell AI. Do any of them have a positive policy, or positions that you want to include?
I haven't taken a very deep look, but from what I've seen, the most common setups are "blanket ban" and "blanket approval". After thinking about this for a few days, I'm starting to lean more towards:
1. LLM use must be marked as such (upfront) so maintainers know what they are dealing with, and possibly to (de)prioritise it if they wish.
2. Users are expected to have verified (in the case of code contributions) that their code is reasonable and that they understand what it does, and/or (in the case of PRs) that the description is actually accurate.
Though if we end up with such a policy, we will need to add AGENTS.md files to try to force this to happen, and we will probably need to have very harsh punishments for people who try to skirt the requirements.

> Lobste.rs's GitHub disallows AI contributions for an entirely different reason that I haven't seen covered in your GH thread yet.
AFAICS, it's because of copyright concerns? I did mention it in my initial comment, but I think far too much of our industry is turning a blind eye to that issue, so focusing on it is just going to lead to drawn-out arguments with people cosplaying as lawyers (badly). I think that, even absent the obvious copyright issues, it is not possible to honestly sign the Developer Certificate of Origin[1] (a requirement for contributing to most Linux Foundation projects), so AI PRs should probably be rejected on that basis alone.
But again, everyone wants to discuss the utility of AI so I thought that was the simplest thing to start the discussion with. Also the recent court decisions in the Meta and Anthropic cases[2] (while not acting as precedent) are a bit disheartening for those of us with the view that LLMs are obviously industrial-grade copyright infringement machines.
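For illustration only, here is one way a disclosure-plus-verification rule like the one above could be sketched in an AGENTS.md file. The wording, structure, and rules here are my own assumptions about what such a policy might look like, not an actual runc policy:

```markdown
# AGENTS.md (hypothetical sketch)

Rules for AI agents (and humans driving them) preparing contributions:

1. Disclose LLM usage upfront in the issue/PR description, including
   which parts were generated or substantially assisted by an LLM.
2. Do not submit until a human contributor has verified that the code
   is reasonable, understands what it does, and has confirmed that the
   issue/PR description is accurate.

Undisclosed LLM usage is treated as a violation of the contribution
policy and may result in the contribution being rejected outright.
```

Whether agents would actually honor such a file is an open question, which is presumably why the harsh punishments for skirting the requirements would be needed as a backstop.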
[1]: https://developercertificate.org/
[2]: https://observer.com/2025/06/meta-anthropic-fair-use-wins-ai...