But those are pretty specific cases (for example, discussing AI in healthcare). That's about the only time I think it's reasonable to post the AI output so it can be analyzed and criticized.
What's not helpful is being hit by users who haven't disclosed that they're using AI. It takes a few back-and-forths before I realize I'm talking to a bot, which is annoying.
Not all AI prompting is expanding the prompt.
What if the original prompt is 1,000 words, includes 10 scientific articles by reference (boosting it up to 10,000), and the AI helps boil it down to 100 words instead?
I'd argue that this is probably a more responsible use of the tools. And more pleasant to read, besides.
Whether it meets the criterion is another thing. But at least don't assume that the original prompt is always better or shorter!
One of the most important lessons is not reading as many papers as possible. It's weeding out as many as possible, so you can spend your limited grey matter on the ones that actually matter.
And that's where the LLM comes in handy, especially if it's of decent quality. It's a Large Language Model: chewing through language and finding issues and discrepancies, or simply checking whether a paper matches your ultimate query, is trivial for them.
It's at least as okay as skimming the original documents and not properly reading them.
I'm just old enough that I was in the middle of the transition from paper (in primary school in the 80s) to online (starting late 90s).
I say this somewhat tongue-in-cheek, but obviously people should drive to 3 different libraries across 3 countries and read the journals in their own binders (in at least 3 different languages).
In reality: full-text online is convenient. Having an LLM assist with search and filtering is convenient.
I could go back to the old ways. Would you like me to reply in pen? My handwriting is atrocious.
I really prefer modern tools, though. Not everything older is better. Whether you want to read what I write is up to you.
(edit: Not hyperbole. I live in a small country, and am old enough to still remember the 80s as a kid.)
It'd be far better to just have a thread about the best way to get good summaries.
You shouldn't just dump a big pile of slop on someone's plate: the actual trick is to filter it down to the bit that counts. Usually when posting, you should do that for the reader. It's only polite.
So, if we filter out the noise, that leaves you with 100 words and 1 link to a reference. Which is actually about right for a typical HN reply. (run this through wc ;-))
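To make the wc aside concrete, checking whether a draft reply actually lands near that 100-word budget is a one-liner (a minimal sketch, assuming a POSIX shell; the draft text here is just a placeholder):

```shell
# Word-count a draft reply before posting; aim for roughly 100 words.
draft="Not all AI prompting is expanding the prompt. Sometimes it compresses."
printf '%s\n' "$draft" | wc -w   # prints the word count (11 here)
```

`wc -w` counts whitespace-separated words, which is close enough for a sanity check before hitting reply.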
I don't expect AI HN responders to out themselves by sharing, but I would be curious to learn if people are prompting anything more involved than just "respond to this on HN: <link>", or running agents that do the same.
So technically the prompts involved might expand into megabytes all told. And in the end I formulate the post myself (to adhere to HN rules), but the prompting can run to many, many megabytes and include PDFs, images, blocks of text from multiple sources, and ... you know. Just Doing The Work.
I think this is valid. Previously I would have (and have, and still do) searched Google, Wikipedia, PubMed, the scientific literature, etc. Not for everything, but often. AI tooling just lets me do that faster, and keep all my notes in one place besides.
Again, the final edit is typically 90-100% me. (The 10% is when the AI comes up with a really good suggestion.) But my homework? Yes. AI is involved these days.
This should be ok. I'm adhering to the letter and the spirit. My post is me.
Example: "write me an article about hidden settings in SSH". You get back more information than most of HN's previous posts about SSH, in a fraction of the text, and more readable.
Actually, screw it, we should just make a new version of HN that has useful articles written by AI. The human written articles are terrible.