However, with the recent chat-based AI models, this arrangement has been turned around: it is now easier to produce a written message than to read one. If a person is not going to take the time to express a message based on their own thoughts, then they do not have sufficient respect for the reader, and their comments can be dismissed for that reason.
Better to post your stream of thought.
Using LLMs to turn a stream of thought into prose mostly just adds fluff and expands the text to make it look more like thoughtful writing. The result looks nice to the creator, because they agree with what it's saying, but it wastes other readers' time: they have to dissect the extra LLM prose to get back to the author's original stream of thought.
Just post what you're thinking, even if it's not elegant prose. Don't have an LLM wrap it in structures and cliches that disguise it as something else.
I value reading novel and interesting thoughts and ideas. I don't feel "tricked" when I read something of substance or something thought-provoking, even if it's LLM-generated and decorated with the platitudes and common forms meant for dull readers.
For now, I would argue the problem is when AI edits for you instead of helping you edit. Take a look at the examples that dang posted if you haven't yet: https://news.ycombinator.com/item?id=47342616
The first 5 I looked at were pretty egregious and not subtle.
It is, by way of being extremely dishonest in at least two ways:
- there's no way you would do this if you were required to disclose that you used an LLM to write your comment.
- therefore, if your primary goal isn't communication, then you must be doing it to look smart and "win" the conversation
Same reason people desperately post links to scientific papers they don't understand in a frantic attempt to stay on top of some imaginary debate.
There are a lot of people who have no time for something like Infinite Jest, and even getting through the first few chapters is an effort. But at least they tried. Having an LLM excuse you from reading the book because it is 1000 pages of postmodern absurdity effectively optimises away the fringes of human creativity and leaves only the average stuff behind.
AI slop detectors already exist and are no better than snake oil, because a person can have an LLM-smelling writing style without actually using AI. After all, LLMs were originally trained on human input.
That reminds me of the Gmail LLM features, where AI can write your emails for you and also summarize incoming ones. Maybe we lost the thread somewhere...
If the generation merely restates the prompt (possibly in prettier, cleaner language), then usually it's the case that the prompt is shorter and more direct, though possibly less "correct" from a formal language perspective. I've seen friends send me LLM-generated stuff and when I asked to see the prompt, the prompts were honestly better. So why bother with the LLM?
But if you're using the LLM to generate information that goes beyond the prompt, then it's likely that you don't know what you're talking about. If you really did, you'd probably be comfortable with a brief note and instructions for readers to look the rest up on their own. The desire to generate more comes from either laziness or a desire to inflate one's own appearance. In either case, the LLM generation isn't terribly useful, since anyone could get the same result from the prompt themselves.
So I think LLMs contribute not just to a drowning out of human conversation but to semantic drift, because they encourage those of us who are less self-assured to lean into things without really understanding them. That's a danger at any time, but certainly one that is more acute at the moment.
(While the patterns may be similar, I have a tendency to be more loquacious due to my larger token limit! %)
Think about that for a minute. 4chan would make fun of the comment you just made.
Email mods instead: hn@ycombinator.com