I think "generated comments" is a pretty hard line in the sand, but "AI-edited" is anything but clear-cut.
PS - I think the idea behind these policies is positive and needed. I'm simply clarifying where it begins and ends.
All this stuff is in flux. I thought a lot about whether to add the "edited" bit - but it may change. What I deliberately left out was anything about the articles and projects that get submitted here. There's a lot of turbulence in that area too, but we don't yet have clarity, or even an inkling, of how to settle that one.
Edit: what I mean is this: while most of those submissions aren't very interesting, some really are. Here's an example from earlier today:
Show HN: Vanilla JavaScript refinery simulator built to explain job to my kids - https://news.ycombinator.com/item?id=47338091
How do we close the aperture for the lame stuff while opening wider for the good stuff? That is far from clear.
If you're going to say that the AI said X, Y, Z, provide a rationale on why it is relevant. If you merely found X, Y and Z compelling, feel free to talk about it without mentioning AI.
However, that's probably not critical enough to formally add to the explicit guidelines, so it's probably fine to leave it in the "case law" realm—especially because downvoters tend to go after such comments.
The comments thing is a lot more intimate in the sense that anyone posting comments is inside the house.
I have a kid with severe written language issues, and the utilisation of speech to text with a LLM-powered edit has unlocked a whole world that was previously inaccessible.
I would hate to see a culture that discourages AI assistance.
> I would hate to see a culture that discourages AI assistance.
Mostly I think the pushback is about AI assistance in its current form. It can get in the way of communicating rather than assisting, and the cost is mostly borne by the readers and by those not using the AI. I have seen this happen when the AI adds information and thoughts that were tangential to the original author's point, and I suspect (though I can't verify it) cases where an author seems to try to dig down into the details but can't.
These rules are always fuzzy and there's always a long tail of exceptions. All the more so under turbulent conditions like right now. I wrote more about this elsewhere in the thread, in case it's useful: https://news.ycombinator.com/item?id=47342616.
https://news.ycombinator.com/item?id=47326351
Yes, please at least have a carveout for accessibility. I definitely have dictated HN comments in the past, and my flow uses LLMs to clean it up. It works, and is awesome when you're in pain.
It's better to communicate as an individual, warts and all, than to replace your expression with a sanitized one just because it seems "better." Language is an incredibly nuanced thing, it's best for people's own thoughts to come through exactly as they have written them.
So yeah, it can change the character of your writing, even if it's just relatively subtle nudges here or there.
edit: we suggested that he disable that feature to help him learn to write independently, and he happily agreed.
1. A system that suggests words, the child learns the word, determines whether it matches their intent, and proceeds if they like the result.
2. A system that suggests words, and the child almost-blindly accepts them to get the task over with ASAP.
The end-results may look the same for any single short document, but in the long run... Well, I fear #2 is going to be way more common.
It is definitely not true that it is better for a poster to communicate like an individual when it comes to spelling and grammar. People ignore posts that have poor grammar or spelling mistakes, and communications with poor grammar are seen as unprofessional. Even I do it at a semi-subconscious level. The more difficult a post is to understand, and the more attention it demands, the fewer people will be willing to put in that effort.
[It looks like MS Word 97 had the ability to detect passive voice as well, so we're talking 30 year old technology there that predates LLMs -- how far down the Butlerian Jihad are we going with this?]
There is no need for that here beyond maybe spellcheck. Use your own thoughts, voice, and words.
"Your unique human voice is more valuable than a thousand prompt-driven LLM doggerels."
Edit: I already got downvoted. :-) Sure, no one can tell exactly why. Maybe the combination of bad English _and_ talking sh*te isn't ideal at all. :-D Anyways, I have enough karma, so I can last quite a while.
The quality of my writing varies (based on my mood as much as anything else, I suppose), but when it is particularly good and error-free then I often get accused of being a bot.
Which is absurd, since I don't use the bot for writing at all.
How do you know? Is it possible the downvoters just didn't like what you said?
This is the opposite of how language works. You want people to understand the idea you're trying to communicate, not fixate on the semantics of how you communicated. Language is like fashion - you only want to break the rules deliberately. If AI or an editor or whatever changes your writing to be more clear and correct, and you don't look at it and say "no, I chose that phrasing for a reason" then the editor's version is much more likely to be understood correctly by the recipient.
I just want clean, easy-to-read content and I don't care about the person who wrote it. A tool like Grammarly is the difference between readable and unreadable (or understandable and not) for many people.
You could even write a plugin for your favorite web browser to do that to every site you visit.
It seems hard to achieve the inverse, that is (would you rather I use i.e.?): rewrite this paragraph as the original author wrote it before they had an AI rewrite it to make it clean (do you like Oxford commas, and em/en dashes! Just prompt your AI) and easier to read.
The guidelines state:
> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
> Don't be curmudgeonly.
On the best of days I manage to follow the rules, but I'm only human. If I run my comment through ChatGPT to try and help me edit out swipes on the bad days, that's not ok?
I'm not using ChatGPT to generate comments, but I've got the -4 comments to show that my "thoughts exactly as they have written them" isn't a winning move.
There are people here who sit at a desk all day banging out multipage emails for work who decide to write posts of a similar linguistic calibre for funsies.
Meanwhile you have someone in a developing country who just got off a brutal twelve hour shift doing manual labour in the sun who wants to participate in the conversation with an insightful message that they bang-out on a shitty little cellphone onscreen keyboard while riding on bumpy public transit.
You could have a great idea, express it poorly, and be penalized for it here, while someone with a blah idea expressed excellently gets showered in replies, despite being worse by some metrics (the ones I think are most important) than the other post.
What's the solution for that?
Remember that you're on a message board and you're not actually 'competing' for anything?
I absolutely do not understand this comment. Are you saying that posting is competitive and that comments have "metrics"?
For me, the line is precisely at the point where a human has something they want to say. IMO - use the tools you need to say the thing you want to say; it's fine. The thing I, and many others here, object to is being asked to read reams of text that no-one could be bothered to write.
This is probably ok:
>> On a technical level, you can really only guard against software that changes your semantics or voice. If you're letting it alter the meaning (or meanings) you intend, or if it starts using words you would never normally use, then it's gone too far.
This is probably too far:
>>> On a technical level, it's important to recognize that the only robust guardrail we can realistically implement is one that prevents modifications to core semantics or authorial voice. If you're comfortable allowing the system to refine or rephrase the precise meanings you originally intended — or if it begins incorporating vocabulary that doesn't align with your typical linguistic patterns — then you've likely crossed a meaningful threshold where the output no longer fully represents your authentic intent.
Something to consider is that you can analyze your own stylometric patterns over a large collection of your writing, and distill that into a system of rules and patterns to follow which AI can readily handle. It is technically possible, albeit tedious, to clone your style such that it's indistinguishable from your actual human writing, and can even include spelling mistakes you've made before at a rate matching your actual writing.
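To make "analyze your own stylometric patterns" concrete, here is a toy sketch of the kind of fingerprint involved. The feature set and function-word list are my own illustration, not any standard stylometry tool; real systems use far richer features (character n-grams, punctuation habits, syntax).

```python
import re
from collections import Counter

# A few high-frequency function words often used in stylometric profiling.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is", "it", "for"]

def style_profile(text: str) -> dict[str, float]:
    """Distill a tiny stylometric fingerprint from a body of text."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total = len(words) or 1
    counts = Counter(words)
    profile = {
        "avg_sentence_len": total / max(len(sentences), 1),
        "type_token_ratio": len(counts) / total,  # vocabulary richness
    }
    # Relative frequency of each function word, per 1000 words.
    for fw in FUNCTION_WORDS:
        profile[f"fw_{fw}"] = 1000 * counts[fw] / total
    return profile
```

Comparing such profiles between your raw drafts and the "cleaned" output is one crude way to check whether an editor (human or LLM) has drifted from your voice.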
AI editing is weird, though. Not seeing a need, unless English isn't your native language.
To be clear, I also think you shouldn't rely on auto-correction or LLMs for correctness (they are great for identifying your mistakes, but I think you should then fix the mistakes yourself, to develop your brain). It's just that "assisted" correctness isn't misleading/harmful in the way that "assisted" tone/character/semantics are.
When a policy is introduced to seemingly guard against new problems, but happens to be inadvertently targeting preexisting and common technology, I don't feel like it is "lawyering" it to want clarity on that line.
For example, it could be argued this forbids all spellcheckers. I don't think that is the implied intent, but the spectrum is huge in the spellchecker space. From simple substitutions + rule-based grammar engines through to n-grams, edit-distance algorithms, statistical machine translation, and transformer-based NLP models.
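To make one point on that spectrum concrete, here is a minimal sketch of the edit-distance end of spellchecking: a classic Levenshtein distance plus a naive suggester. The function names are illustrative, not taken from any real spellchecker.

```python
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance:
    # prev[j] holds the distance between a[:i-1] and b[:j].
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def suggest(word: str, dictionary: list[str], max_dist: int = 2) -> list[str]:
    # Rank dictionary words by edit distance to the (possibly misspelled) word.
    scored = sorted((levenshtein(word, w), w) for w in dictionary)
    return [w for d, w in scored if d <= max_dist]
```

That whole mechanism is decades old and involves no language model at all, which is exactly why "no AI assistance" is so hard to pin down as policy.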
Ultimately, this comes down to people making a good-faith judgment about how much AI was involved, whether it was just minor grammatical fixes or something more substantial. The reality is that there isn’t really a shared consensus on exactly where that line should be drawn.
You forgot the /s ?
It was asked: if "AI-generated code" is just code suggested to you by a computer program, where does using the code that your IDE suggests in a dropdown fall? That's been around for decades. Is the policy LLM- or "gen AI"-specific? If so, what specific aspect makes one use case good and the other bad, and what exactly separates them?
It's one of those situations where it seems easy to point at examples and say "this one's good and this one's bad", but when you need to write policy you start drowning in minutia.
IDE code suggestions come from the database of information built about your code base, like what classes have what methods. Each such suggestion is a derived work of the thing being worked on.
I benefit from my phone flagging spelling errors/typos for me. Maybe it uses AI or maybe it uses a simple dictionary for me. Maybe it might even catch a string of words when the conjunction isn't correct. That's all fair game, IMO. But it shouldn't be rewriting the sentence for me. And it shouldn't be automatically cleaning up my typos for me after I've hit "reply". That's on me.
By the same token, what if I have a human editor help me out? What if we go back and forth on how to write something, including spelling, grammar, tone, etc.? For example, my wife occasionally asks me to review her messages before sending them because she thinks I speak well and wants to be understood correctly.
The problem is that we are punishing the technology, not the result. Whether it's a human or an LLM that acts as your editor should be irrelevant; what matters is that you are posting your own work and not someone else's. My wife having me write all of her messages for her would be just as dishonest as her having an LLM write all of her messages for her if she always presented them as her own writing. But if she writes the copy and I provide suggestions for changes, what's the harm in that? And why should it matter whether it's a human or an LLM that provides that assistance?
i type my comments without capitalization like i'm typing into some terminal because i'm lazy and people might hate it but i'm sure they prefer this to if i asked an LLM to rewrite what i type
your writing style is your personality, don't let a robot take it away from you
In fact, I'd argue that lazy commenting is the real problem, which has now been supercharged by LLMs.