Coding agents greatly lower the barrier to contributing something that at least looks okay on the surface, so reviewing contributions will quickly become even more of a bottleneck. The effort of contributing manually used to filter out most low-effort attempts, or at least made them easy to identify and reject.
That dynamic is now different, and maintainers risk being swarmed with low-effort contributions that take a lot of time to review and respond to. Some AI contributions might, after review and revision, end up being of acceptable quality, but how can maintainers know which ones without reviewing everything, good and bad alike?
I think we will see multiple attempts like this to shift things back to the old dynamic by rejecting anything that can be identified as AI-generated at a glance, but I suspect that will become difficult over time, so my prediction is that we will soon see more open source repos stop accepting outside contributions entirely.
Even if LLMs one day become good enough to quickly produce code on par with humans (which I strongly doubt), why would contributors have any incentive to have someone else do that (the easy part) rather than just doing it themselves?
I remember when people were crying about how much power a Google search uses. This is the same thing all over again, and it is as pointless now as it was back then.
https://arstechnica.com/ai/2025/08/google-says-it-dropped-th...
> Google says it dropped the energy cost of AI queries by 33x in one year. The company claims that a text query now burns the equivalent of 9 seconds of TV.
That's like calling a person going for seconds a conservative (in the USA political sense).
Some people enjoy the outcome, others enjoy the process.
I find the criticism interesting. It's like one restaurant saying they'll use only electric stoves for the climate, then chefs all over the world calling them naive and stupid for it.
It's as if ethical arguments justifying local behavior are automatically interpreted as a global attack that has to be rejected.
Fun while it lasted, huh?
So, autocomplete done by deterministic algorithms in IDEs is okay, but autocomplete done by LLMs is banned? Okay, surely everybody agrees with that; it's policy, after all.
How is it possible to distinguish between the two in the vast majority of cases, where the hand-written code and the autocompleted code are byte-for-byte identical?
Are we supposed to record video of ourselves coding to show that we typed the letters one by one?
> 2. Recommending generative AI tools to other community members for solving problems in the postmarketOS space.
Is searching for pieces of code considered part of solving problems?
Then how do we distinguish between finding a required function by grepping the code and finding it by asking an LLM?
Can we ask an LLM questions about postmarketOS? Like, "what is the proper way to query the kernel for X given Z"?
If a community member asks this question and I already know the answer via an LLM, am I now banned from giving the correct answer?
--
Don't get me wrong. I am sick and tired of the vomit-inducing AI bullshit (as opposed to the tremendous help that LLMs provide to experienced devs).
I fail to see how a policy like this is even enforceable, let alone productive and sane.
On the other hand, I absolutely see where this policy is coming from. Projects seem to be having a hard time navigating the issue and are looking for ways to stem the overwhelming amount of incoming slop.
I think we still haven't found the right way to do it.
Because autocomplete still requires heavy user input, with a SWE at the top of the decision-making tree. You could argue that using Claude or Codex enables you to do the same thing, but there's no guarantee someone isn't vibecoding and then failing to test adequately to ensure, first, that everything can be debugged and, second, that it fits in with the broader codebase before they try to merge or open a PR.
Plenty of people use Claude like an autocomplete or to bounce ideas off of, which I think is a great use case. But beyond that, using a tool like this in more extreme ways is becoming increasingly normalized, and that is probably not something you want in your codebase if you care about code quality and avoiding pointless bugs.
Every time I see a post on HN about some miracle work Claude did, it turns out to be very underwhelming. Wow, it coded a kernel driver for out-of-date hardware! That doesn't do anything except turn a display on... great. Claude could probably help you write a driver in less time, but again, it will only really work well if you're at the top of the decision-making hierarchy and manually reviewing the code. There are no guarantees of that in the FOSS world, because we don't have keyloggers installed on everybody's machines.
But again: how do we distinguish between manual code input and sophisticated autocomplete?
AI use should be able to accelerate the development of ports for currently unsupported or under-supported devices, which would directly support the project.
I guess I wouldn't worry about the policy; they will probably relax it naturally if/when AI becomes more useful in practice.
That ship has sailed with Codex 5.3 handling 90% of SWE jobs, unfortunately. I expect the next 9% won't survive the following 12 months, and the last 1% will be done within 5 years.
It isn't even about principles: projects not using gen AI will become basically irrelevant, because the pace of gen-AI-enabled competitors will be too great.