HN need not offer itself up as a Petri dish for AI writing experimentation. There are startups in that space, and at least one must be YC-funded, statistically speaking. Come back with the outcomes of the experiment you describe and make a case that they should change the rule. Maybe they will! As of today, though, they are apparently unconvinced.
> the average quality might even go down
We have a recent concrete analysis of Show HN indicating support for this possibility, which resulted in the mods banning new users from posting to Show HN (something they've probably been resisting for close to twenty years, given how frequent a spam vector that must be).
> Perhaps you’ll say that human+LLM text will never be as high-quality as human alone
Please don’t put words in my mouth, insinuating the tone of my reply before I’ve made it, and then use that rhetorical device to introduce a flamebait tangent to discredit me. I’ve made no claims about future capabilities here, and I’m not going to address this irrelevance further.
> in the long term, we will have to come up with more sophisticated criteria
Our current criteria seem sophisticated already. Perhaps you could make a case that AI-assisted writing helps avoid guideline violations. This one tends to be especially difficult for us all today:
“Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith. Eschew flamebait. Avoid generic tangents.”