https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
I have similar reservations about code formatters: maybe I just haven't worked with a code base with enough terrible formatting, but I'm sad when programmers lose the little voice they have. Linters: cool; style guidelines: fine. I'm cool with both, but the idea that we need to strip every character of junk DNA from a codebase seems excessive.
One observation I ran across on the use of the em-dash ("—") was that if AI was given training data from writers that were considered good/great, and those writers tended to use em-dashes, then it would be unsurprising that AI 'learned' to use the character.
So the observer's advice was that humans who already used the em-dash as part of their personal writing style should continue to do so, now and going forward.
He has a blog, which I think is particularly relevant to this conversation: https://www.patreon.com/c/GreenWizard/posts?vanity=GreenWiza...
IMO his writing style is quite melodramatic. I have asked myself how much of that is his perhaps overly compensatory tendency to project an articulate voice, and how much of it is applied by his AI tools.
The last time I saw Anton in person I asked him about his writing process, and he said something like, "I just draft it and then ask ChatGPT to make it sound professional or whatever." So after thinking about it for a while, I have decided that this is his preferred voice, so I'll accept it as his voice.
IMO it is not for you to decide how people recast their own voice. Once you adopt that dogma, you're committed to denying other people's experience of discrimination (through the lens of disability's symptoms). Whether or not you participate in that other type of biased discrimination is irrelevant.
Too often, advocates try to smuggle in their preferred policy using stories like this as cover.
I think HN is broadly supportive of these voices, and I think that an "unwritten exception" to this rule is implicit here. But I'm in the camp that making an explicit exception for special circumstances would be a meaningful statement that all voices are welcome.
Even in this comment, I initially wrote the start as "you're wrong", but then had to catch myself and go back and soften it to "that's incorrect", even though the meaning is the exact same. The constant impedance mismatch is tiring.
When it's a matter of a spelling error or two, no problem. But too often I find I've got to read something multiple times before I have any idea what my interlocutor is saying.
Is our hatred of "AI Slop" and greater posting traffic worth handicapping our ability to communicate with each other?
When I receive an LLM written email at work, I start to question every specific detail because I have no idea if it actually came from the writer (and is therefore important), or was inserted as filler by a computer (and therefore irrelevant).
It wouldn’t be as much of a problem if everyone carefully edited the LLM output themselves before sending (although voice, tone, emotional context clues would still be elided).
But in practice that doesn't happen; it's just too easy to click send, and the time burden gets passed to the other person.
I get: We found no items matching by:dang "own voice"
https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...
But that's really what you're now enforcing against: easily detectable LLM prose and voice. LLM detection is very difficult, especially for short, comment-length texts. There is never proof, only telltale phrases. How will this be enforced? What the heck even is "AI"?
The thing that really frustrates me is that I can't put tokens through a transformer in any way when editing my post? I can't have an LLM turn a bare link after a sentence into a [1]? I can't have an LLM do literally nothing more than spell check, even though I could with a rule-based model? What about other LLMs, or SLMs, or classic NLP chained together? Or is it just the transformer that's forbidden?
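For what it's worth, the rule-based spell check contrasted with the LLM here is trivial to build. A minimal sketch (my own illustration, not any particular tool), using Python's stdlib difflib against a toy word list:

```python
import difflib

# Toy dictionary; a real checker would load a full word list.
WORDS = ["the", "quick", "brown", "fox", "jumps", "over", "lazy", "dog"]

def suggest(word: str, words=WORDS) -> str:
    """Return the word unchanged if known, else the closest dictionary entry."""
    if word in words:
        return word
    matches = difflib.get_close_matches(word, words, n=1, cutoff=0.6)
    return matches[0] if matches else word

def spell_check(text: str) -> str:
    """Correct each whitespace-separated token independently."""
    return " ".join(suggest(w) for w in text.split())
```

No transformer, no model weights: just edit-distance-style similarity against a fixed vocabulary, which is exactly the kind of tool the comment says would be allowed under a "no LLMs" reading.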
And it is officially sanctioned that people ought to be keeping in the back of their mind "does this feel LLMish?" instead of "is this a good comment that contributes to the discussion?" Maybe LLM prose is so annoying and insufferably sycophantic that even if all the content and logic were sound, it should still be moderated completely out. But the entire technological form is profane and unclean?
I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use. I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.
I suppose, then... goodbye?
After all, there are a ton of different forums where you can have your chatbot talk to other chatbots.
That's a good start already. Don't let the impossibility of the perfect prevent implementing the good.
>I want my comments judged by the contributions they make and do not make to the discussion. If the LLM makes the comment better, it is good. If it makes it worse, it is bad.
Nope, it's all bad. If I wanted the comments of an LLM, I'd ask an LLM.
>I am 100% not interested in participating in a community that seeks to profile and police the technological infrastructure that its members use.
Well, don't let the door hit you on your way out.
There used to be a sort of gentleman's agreement that I could spare the time to read and judge your comment because you went through the effort of writing it.
I'd normally not do this for a text of this length, but just for fun, here's what ChatGPT suggests:
As a non-native speaker, I sometimes use LLMs to help me find wording that conveys my thoughts the way I want them to be understood by the reader. I would never copy the output verbatim, because it often sounds blunt and unlike me, but I’m happy to use grammar corrections or improved phrasing.
- Made the prose flatter.
- Slightly changed the sense ('gladly' and 'happy to' are not equivalent, and neither are 'search for' and 'help me find') in ways that do add up
- Not actually improved anything
Introducing "because" also adds to the clarity without weighing down things or changing the meaning. "Improved" instead of the bland "better" again is an... improvement.
I imagine GP didn't sneak in the tendentious "to fit with and be well received in the hacker news community" in his instructions.
Overall this was a worthwhile assist. I believe (totally understandable) anti-AI animus is coloring a lot of these replies. These tools can be useful when applied sparingly and targeted, as GP did. It's true and very unfortunate that they are often used as the proverbial hammer in search of a nail, flattening everything in the process.
> formulate my thoughts like I intend them to be received by the reader
> conveys my thoughts the way I want them to be understood by the reader
There is a way the parent poster constructs their sentences that may sound a little clumsy in a literary sense, but the rewrite is what is actually dumbed down.
To continue the experiment I have fed the above paragraph to Gemini with this prompt "Fix grammar and wording issues in the following paragraphs, if needed reword to fit with and be well received in the hacker news community."
This experiment highlights the core issue. Every language has its own voice—academic, formal, informal, or intimate. Your rewritten paragraph leans into the notorious "LLM voice": it’s less direct, feels slightly pandering, and strips away the hooks that usually spark further discussion.
Does it? I don't see it. If anything, it is more direct and clear, not less, i.e. "to help me find wording that conveys my thoughts the way I want them to be understood by the reader" instead of the more convoluted "to search for a way to formulate my thoughts like I intend them to be received by the reader". How is it pandering? And how exactly does it remove "injection points"?
It basically chose more precise words where that was possible, resulting in a net improvement, AFAICS.
I have answered something similar before: I struggle to send messages as I want them to be received, and with AI it is even harder; the "taste" of my thoughts, how I like to express myself, the habits of phrasing and wording, get lost completely.
So I just never "AI" my content.
- We had to take spelling tests in school
- English speakers make (generally light) fun of others' spelling or grammar mistakes in a casual setting
- In a professional setting, a lot of time is taken to proofread our own emails
- There are de jure spellings for every word
- Some online communities are really weird about pointing out grammar and spelling mistakes (namely Reddit)
Language is meant to be a fluid, evolving thing, but I always felt English was treated the opposite way. Maybe that's also why it's the de facto lingua franca.
I do think, and hope, that this rigidity will change thanks to AI. I've started to embrace my mistakes. I care a lot less about capitalization and punctuation in my Slack messages, for example.
I for one don't think I'll ever AI-wash my texts or use AI translations verbatim. If everybody else did, it would certainly be a sad loss of diversity, but IMO it's only going to make the people who put in their own effort stand out more. Hopefully in a positive way. Time will tell if we're a dying breed.
I'm afraid the need for anybody to learn foreign languages will be subject to much change and discussion for upcoming generations.
Must quote the last paragraph of Chapter 2, "Media Hot and Cold", from Marshall McLuhan's Understanding Media, which I've double-underlined.
For it simultaneously explains to me both TikTok (quick consume-scroll-like-react-"create" dopamine-hit cycles) and LLMs (outsourcing the essential mechanical friction of thinking, which requires all senses, for me at least).
The essential friction of deliberate, first-party speech-making, misspellings and all, is why voice and conversation contain life.
I don’t think it is so binary black/white though.
I don’t mind if someone who has no command of English uses a translator. But there is a difference between a translator and an AI/LLM.
How hard is it to recognize common idioms and at least state the literal meaning followed by the semantic meaning? There are at most, what, a few thousand per language?
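The literal-plus-semantic gloss proposed above is essentially a table lookup. As a minimal sketch (the idiom table and function names are my own invention, not a real library), in Python:

```python
# Hypothetical mini idiom table: idiom -> (literal reading, semantic meaning).
IDIOMS = {
    "kick the bucket": ("strike the pail with one's foot", "to die"),
    "break the ice": ("shatter frozen water", "to ease initial social tension"),
}

def annotate(sentence: str) -> str:
    """Append a [lit.: ... ; i.e. ...] gloss after any known idiom in the sentence."""
    out = sentence
    for idiom, (literal, semantic) in IDIOMS.items():
        if idiom in out:
            out = out.replace(idiom, f"{idiom} [lit.: {literal}; i.e. {semantic}]")
    return out
```

A real translator would also need to handle inflected forms ("kicked the bucket"), but for a few thousand fixed idioms per language, even this naive substring pass gets surprisingly far.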
Unless they don't care about learning English, which shouldn't be frowned upon.
Google or Bing Translate might not use the exact same words and phrases that LLMs use every single time, so you are better off using those.
And an LLM does not know the context, so it makes a lot more mistakes there. But it is much cheaper.
I am reminded of a question I posted in a vintage Apple subreddit. I described the problem and all the steps I took to try and resolve it. In the middle of the text I also mentioned that I had asked AI and that it gave me a wildly strange answer, which I dismissed but which gave me hints to continue onwards.
The majority of answers focused on that one sentence and completely ignored the rest of the post (and even the problem I was posting about). I was ridiculed (sometimes aggressively) for even considering trying the AI. Eventually someone finally answered the question; I thanked them and continued to get downvoted massively.
While I get that the vintage community can attract some colorful characters, this was an interesting observation of how badly they reacted to the post. I've since refrained from mentioning AI and, furthermore, have tried to limit my involvement with communities like that, while (ironically) working on better ways to use AI to solve problems so as to minimize dealing with them (finding ways of providing more system-level data to the AI in my prompt).
Also, to the people saying that they just let an LLM replace phrases: that's the worst thing you can do. LLM style lies mostly in the phrases; they come from a narrow selection that the models tend to reuse.
However, this isn't an entirely new phenomenon. There is a company in Spain called Audens that manufactures croquettes. People prefer hand-made croquettes to industrially produced ones, and they can usually tell the difference by how perfectly regular the industrial ones are, so Audens developed a method to produce irregular croquettes. Each individual croquette is slightly different, creating a homemade feel that appeals to consumers.
If it's too perfect, it isn't human.