I wish the team could either restrict new accounts from posting or at least offer a default filter so I only see posts from accounts that meet certain criteria.
I don’t want to see HN become Twitter, which is full of bots and noise; that would be a really sad day.
I do think this is relevant though: "HN can't be immune from macro trends" - https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
More than once I've seen the author, or a significant party to a story, chime in through a fresh green account after being alerted one way or another that the story was posted here. And when they do, it's usually very interesting.
As such, I would find it detrimental if they had to jump through too many hoops: either they wouldn't bother, or it would take so long that the thread dies before they can participate.
I still remember creating my HN account. It stands out in my memory, because it was the smoothest, simplest, easiest, and quickest account creation of my life.
I had lurked here for around a decade before finally creating an account. Any urge to participate was thwarted by my resistance toward creating accounts (I just hate account creation for some reason). But HN's account creation process was a breath of fresh air. "You mean it can be this easy? Why isn't it this easy everywhere? If I had known how simple it was, I would have created an HN account years earlier, lol."
It was especially stunning to me, because I think the discourse on HN is generally of a higher quality than most other sites (which I wouldn't naturally associate with such an easy account creation process).
It's my only fond memory of account creation (along with maybe when I created an account on America Online back in the 90s, since that was my first ever account and it was all so novel). Just a few quick seconds, and then I was already commenting on HN. It was beautiful. I remain delighted.
1. Ideological and/or economically motivated actors will just see it as a cost of doing business.
2. Ordinary sign-up friction is more likely to make HN appear ordinary to anyone who stumbles upon it.
3. Sign-up friction is a moat. The strength of HN is moderation of what gets in.
When given a conversation in which Alice and Suzy are one-upping each other ("my husband is rich", "my kid is a genius") and asked what emotions they are feeling and what Suzy could have said instead to improve the conversation, it gave accurate responses (e.g. they're feeling insecure, competitive, envious).
The standard solution is requiring an email address to register an account, maybe a Cloudflare captcha, and then using good network logging to group accounts by IP and chainban abusive accounts when they are caught by other mechanisms.
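A minimal sketch of the IP-grouping part of that idea, assuming a hypothetical access log of (account, ip) pairs (the account and IP values here are made up for illustration):

```python
from collections import defaultdict

# Hypothetical access log: (account, ip) pairs.
log = [
    ("alice", "203.0.113.5"),
    ("spam_bot_1", "198.51.100.9"),
    ("spam_bot_2", "198.51.100.9"),
    ("spam_bot_2", "198.51.100.10"),
    ("spam_bot_3", "198.51.100.10"),
]

def accounts_sharing_ips(log):
    """Group accounts that transitively share any IP (union-find)."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_ip = defaultdict(list)
    for account, ip in log:
        by_ip[ip].append(account)
    for accounts in by_ip.values():
        for a in accounts[1:]:
            union(accounts[0], a)

    groups = defaultdict(set)
    for account, _ in log:
        groups[find(account)].add(account)
    return [g for g in groups.values() if len(g) > 1]

def chainban(banned_account, log):
    """If one account in a group is caught, ban the whole group."""
    for group in accounts_sharing_ips(log):
        if banned_account in group:
            return sorted(group)
    return [banned_account]
```

With the toy log above, catching `spam_bot_1` chainbans all three bots (they transitively share IPs) while `alice` stays untouched. A real system would also have to handle shared NATs, VPNs, and mobile carrier IPs, which make naive IP grouping prone to collateral damage.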
At least new accounts are more obvious here. This pattern has been increasingly used for scams, spam and AI slop on Instagram, X and Facebook for years.
I hear you that it's not great for users who are genuine HN readers but haven't posted before. I wish we had a better idea what to do for those cases!
By focusing on or restricting to human-only use, you risk dehumanising those who need technological support.
More here in case useful:
https://news.ycombinator.com/item?id=47342616
My initial thought is to set up a devoted account like "sock_puppet_detector", and using the infrastructure from https://hackersmacker.org/, add any likely sock-puppets as 'foes'.
A lot of users don’t seem to realize that anyone can click on the domain in a "Show HN", and Hacker News will show you all the times that domain has been submitted. So you’ll see four or five different low-karma sock-puppet accounts that have all submitted the same site.
The HN culture has shifted drastically over the past 5 years.
What would be best is for you to poke around the site a bit and get familiar enough with it to decide if you'd like to be a part of the community or not. If so, you're welcome! You aren't the first person to feel a bit lost here as a new user, because the site is rather minimal and cryptic—but your eyes will adjust if you keep reading it over time.
If, on the other hand, you're not interested, that's totally ok, but then please don't try to promote your projects here. HN is a community, and the way to get attention for your things is to first give attention to other people's things.
I don't want to specify X, Y, Z criteria technically because that would just be an invitation to game the system. Worse, Gemini will then tell you "first do X, then do Y and Z, and then you'll get that 'real quality feedback'".
What I want Gemini to tell you (and everyone else!) is "don't use Hacker News primarily for promotion - they have a rule against that. Instead, participate in the community for the intended reason—intellectual curiosity—and after a while, it will become clear how the culture works and how to share your projects there".
What's needed is to get to know this place a little and how it operates, which means taking the time to poke around and explore. I realize HN is rather cryptic at first, but it doesn't take that long for one's eyes to adjust.
I don't want to make HN harder for legit new users, but I do think a bit of community participation is reasonable before posting a Show HN, so it isn't just a box on some "how to promote your project" checklist.
It's easy for people to game but it's at least one more effort-based hurdle.
I'm not a fan of moltbots / openclaws (and any clones that popped up in the last month). I don't use them and try to discourage their use. That being said, millions of them are running anyway...
Can't allow low-quality posting from new accounts here but thank you for listening to the concerns.
A new human user will spend actual time creating a thoughtful and helpful post, only to be greeted by "sorry, your post has been removed by automod because you don't meet criteria". They get disheartened and walk away forever.
The spammers, on the other hand, know how the rules work and so will just build their bots to work around this (waiting 30 days, farming karma).
The net result is that these rules ensure that a much greater proportion of new accounts come from bad actors - who else would jump through hoops just to participate on a web forum?
Not to mention Reddit mass-removed experienced moderators when they protested Reddit removing their access to good third-party tooling.
That's the day the site started its death spiral.
And on top of that, some of said "volunteers" are power-hungry, petty, useless fucking morons. Especially the large subreddits tend to be run by people I wouldn't trust to boil some pasta without triggering a fire alarm (and yes, I know people who manage that).
I still love Reddit for all its flaws though.
IMO, new accounts should be restricted from creating new posts, or at least certain kinds of new posts.
Replying shouldn't be restricted. That is how users interact with each other and learn the etiquette of HN.
No information about what threshold I need to cross, what the requirements are, what I need to do to post my project.
Very cool.
If "farming karma" is a thing, maybe that forum deserves what is coming. Either the karma mechanic is inappropriate given the demographic, or it is too hard for the users to avoid upvoting bots.
this is the reason I was never keen on StackOverflow etc.
tried posting there several times, many times actually - every time some annoying condition was not met
well, screw you too then! I walked away and never bothered to contribute again
Even for posts that are interesting to me, I get the feeling that it's not worth looking at because it was probably made using LLMs. Nothing against them, but I personally thought of Show HNs as doing something for the love of it, the end result being a bonus.
I'm not opposed to AI automating away stuff no one liked doing, or even more utilitarian things in general, but robots posting on social media and discussion sites seems antithetical. I don't know what the point of talking to a robot would be when I could talk to Claude if I wanted to do that.
I'm not even 100% sure why people are doing Show HN for low-effort stuff that was done in 45 minutes in Claude. I guess it's trying to resume-pad or build a brand or something?
Github star farming, SEO, etc
So I guess I'm saying, the ideal rate of Show HN posts has probably gone way up. Unfortunately it's also resulting in a lower SNR. Not sure what to do about it, though.
It does take the handcraft out of it, in that sense an LLM-made tool would be more akin to IKEA stuff compared to a handcrafted work of art (though I struggle to call even hand-made electron crap a work of art, lol).
But yeah I know what you mean, they are usually half-finished solutions.
This is the big one for me. A small toy website someone made as a passion project used to be the big draw of HN for me, but now I just assume it's a vibe-coded mess that'll 404 in 7 months.
I'm using a new account and will likely use one forever, as I don't want lots of posts linked together, nor do I care about points or karma or whatever it's called. My first few comments are always shadowbanned. I also see lots of dead posts for new accounts with "showdead" turned on. A lot of them are normal, useful comments, some are inflammatory or just plain stupid. I haven't seen many comments that seem to be AI generated. Maybe they are and I just don't see it, idk.
Anyway, if a comment passes some basic filter (doesn't post shady links or talk about VIAGRA or 11 INCH PENIS or something spammy), I hope they still show up, even as "dead". On this account I copied 1 dead comment to give it more visibility, and I've done it before a few times, too. The comment is still dead, btw (id 47262467). And maybe instead of (shadow)banning new users/posts, just make a separate view for old/established accounts and another one for all posters.
I would also be glad if I could solve some CPU- or RAM-intensive task as PoW. If I really had to, I'd pay with Monero or something similar, as long as it's an anonymous currency with low fees so a payment equivalent to 25 cents wouldn't incur a big fee. I wouldn't pay more per account (especially when I rotate them), as I've been a lurker for years and only recently started posting, anyway (so I don't care that much if I can post).
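The CPU-intensive task the commenter describes is essentially hashcash-style proof-of-work. A minimal sketch, assuming the site would hand out a random challenge string at signup (the challenge format here is invented for illustration):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty: int) -> int:
    """Find a nonce so sha256(challenge + nonce) starts with
    `difficulty` hex zeros. Costly to produce, cheap to verify."""
    target = "0" * difficulty
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce

def verify_pow(challenge: str, difficulty: int, nonce: int) -> bool:
    """Server-side check: a single hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

# The server issues a fresh random challenge per signup attempt; the
# client burns CPU finding the nonce; the server verifies in one hash.
nonce = solve_pow("signup:2024-06-01:abc123", difficulty=4)
assert verify_pow("signup:2024-06-01:abc123", 4, nonce)
```

Each extra hex zero multiplies the expected client work by 16 while verification stays constant-time, so the operator can tune the cost per account without adding server load.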
Finally, thanks for letting us sign up over Tor. :)
In the current system people can vouch for dead posts from shadowbanned new accounts, if I understand correctly. It seems people do it, to a certain degree at least, because I rarely see good comments that stay dead forever.
EDIT: I meant (but totally forgot) to qualify that my "proposal" would only apply when the LLM-ness is self-obvious—idk, make up a "reasonable person" standard or something. Presumably, the moderators would err on the side of letting things slide. Even so, many comments I've seen are simply impossible for any reasonable person to claim as "human-written"—the default ChatGPT style is simply too distinct.
It pretty much is. It’s not hard and fast (sometimes we’ll warn people or email them to ask if it’s not certain) and it takes time for us to see things and act, especially when people don’t email us when they see these comments.
But as a general rule, accounts that post generated comments get banned.
I'm joking, of course. If your comment was generated by Eliza it would have started with "How do you feel about 'I think your comment...'" :)
We had people defending the fired Ars Technica guy, even though he admitted to using an LLM in some sort of a contrived non-apology along the lines of "I did it because I had a cold".
My main problem with that is that you can just generate an infinite supply of LLM op-eds about LLMs, and is this really what we want to read every day? If I want to know what ChatGPT thinks about the risks or benefits of vibecoding, I'll just ask it.
And it's becoming more and more difficult - not just by AI getting "better" (and training removing many of the telltale signs), but also because regular people "learn" to write like an AI does. We're seeing it with "algospeak" - young terminally online people literally say stuff like "unalived" in the meatspace nowadays.
We're living in a 1984 LARP.
Some of it is also horribly easy to spot. If the text is full of:
- Overly positive commentary and encouragement
- Constant use of bullet point lists, bolding and emoji
- This quaint forced 'funniness', like a misplaced attempt at being lighthearted
- A lot of blah-blah that just misses the point
- Not concise and to the point, but also not super long
Then that really screams ChatGPT to me.
I think it's because this seems to be the default styling of ChatGPT. When people tailor their prompt to be more specific about style it's a lot harder to detect but if they just dump a few lines of instructions about the content into it, this is what you'll get. So the low-effort slop is still pretty easy to detect IMO.
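The stylistic tells listed above can be caricatured as a toy scorer. This is purely illustrative (the patterns and "hype words" are my own guesses, and real detection is unreliable, with serious false-positive risk):

```python
import re

# Toy heuristic for the stylistic tells above. Illustrative only:
# actual LLM detection is far less reliable than pattern matching.
TELLS = [
    ("bullets", re.compile(r"^\s*[-*]\s", re.M)),          # bullet lists
    ("bold", re.compile(r"\*\*[^*]+\*\*")),                # markdown bolding
    ("emoji", re.compile("[\U0001F300-\U0001FAFF\u2705\u2728]")),
    ("hype", re.compile(r"\b(delve|game.changer|truly|seamless)\b", re.I)),
]

def slop_score(text: str) -> int:
    """Count how many distinct tells appear; higher = more ChatGPT-ish."""
    return sum(1 for _, pattern in TELLS if pattern.search(text))

human = "idk, seems fine to me. the patch fixes the race but not the leak."
slop = ("**Great question!** Let's delve in:\n- Seamless \u2728 integration\n"
        "- Truly a game-changer!")
```

Here `slop_score(human)` is 0 and `slop_score(slop)` is 4, which matches the intuition that low-effort default-style output is easy to flag; anything prompted toward a specific style would sail right past this.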
But in practice, I frequently encounter a comment that either screams generic LLM slop or just gives off a vague, indefinable "vibe" due to one or more telltale signs; that's red flag #1. Then I go to the comment history, and if it's really a bot/claw/agent or a poster heavily using LLMs, I'll usually find page after page of cookie-cutter repeats of the exact same "LLM smell" (even if that account has been prompted to avoid em-dashes/lists/etc, they still trend towards repetition of their own style).
At that point a human moderator would have more than enough evidence to ban an account. It's not like we're talking about a death sentence or something. If there's no clear pattern of abuse in the long-term commenting activity, then give them the benefit of the doubt and move on.
Maybe there can be a dedicated 'flag botspam' button?
Then again it's a nuanced issue. I see AI used in a large percentage of writing now, so would this rule apply to the article as well?
I would argue that those cases are really the ones that cause an LLM-specific harm, i.e., which make people feel like they aren't exclusively among fellow humans.
If someone posts something that doesn't clearly read LLM-ish, but is otherwise terrible, it's not really different from if the same terrible thing had been written by hand.
I don't think anyone who objects to LLM comments is really demanding a super-low false negative rate. Just get rid of the zero-effort stuff. For example, recently I've seen a lot of comments from new accounts that are just sycophantic towards TFA and try to highlight / summarize a specific idea or two, but don't really demonstrate any original thought (just, like, basic reading comprehension and an ability to express agreement). And they'll take a paragraph to do so, where a human with the same level of interest in the material might just say "good post" (granted, there's an argument to be made for excluding that, too).
Those low-value complaints add nothing to the conversation, and the content didn't make it to the front page because it was bad. If the sole objection is "AI bad", keep it to yourself... it's boring.
Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
LLM-assisted writing doesn't have to be low effort; it can help people express themselves better in many cases. I'd argue that someone who spent their time doing multiple passes with an LLM to get their phrasing just right has obviously taken more care than the majority of people on HN take before commenting.
And if you don't like the way something is written? Just downvote it. That's true whether or not it's partially/wholly written by an LLM.
And what about users like this, whose comments are very much entirely LLM-generated and possibly even a bot? https://news.ycombinator.com/threads?id=BelVisgarra
Hard disagree. I have been learning another language and wouldn’t pretend to write posts after an LLM rewrote it because it is literally lower effort than learning the language correctly.
Like definitionally, you are using a machine to offload effort. I don’t know how you could claim that is not “low effort” when that’s the point of the tool.
> Some people can really benefit from using LLMs to help them write. E.g. non-native speakers.
/heavy sarcasm
That being said, my mother used to insist on hand-written cover letters from job applicants. Her rationale: it takes effort, so it weeds out all the applications from people who are just randomly spraying out applications for jobs they are not qualified for.
So I would propose, in the ideal world where we could perfectly enforce the rules we chose, that the rule be "AI for translation only". If it wrote your content, your comment is gone. If it translated content that you wrote, your comment is still welcome.
What if someone used an LLM to just translate?
> worthy of an instant ban
First, it is not always possible to identify an LLM-generated comment. There are too many false positives. Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?

I have no idea what that could be useful for, but since the Turing test is now essentially beaten, maybe its usefulness has come and gone too.
> Imagine if this system was implemented, and one of your comments was identified as LLM-generated and you were instantly banned. How would you feel about it?
It sounds like a fast, efficient, inexpensive and foolproof recipe for destroying a community. Let's use that as a future test: anyone who advocates for it is undeniably trying to destroy HN, so they get downvoted to 1 karma and permanently blocked from voting on anything else.
But in principle I agree with you; the rule for me is 'if it wasn't worth your time to write, then it certainly isn't worth 1000x other people's time to read'.
Wow this is really cyberpunk.
I'll bring my Yubikey!
I'd also like to see an "Order of the White Lotus" community (or Fight Club if you prefer) where people who collectively agree to not use AI against each other can come together. They can still use AI (i.e. out of necessity) just not with other members knowingly.
I suspect whatever form it takes the stakes will be very high to hack yourself into and pollute the space. So the more successful the community becomes, the harder it is to keep in order.
I do like your idea, though.
Local groups have a problem where members admit their friends or pressure others into inviting their friends who are not a net positive, but it feels too impolite to refuse or to kick someone out. Meeting someone in person also develops a sense of a social bond that makes it harder to downvote or flag their posts.
Local groups have always been a haven for affinity fraud, too. Running a scam is easier when you can smile, be charismatic, and pretend to be a personal friend before springing your ask on to your victims.
p.s. @patrickmay: jinx!
This falls apart as soon as you realize that evaluating the text requires far more effort than generating it. If you're spending 2 minutes reading text that took 2 seconds to generate, you already lost.
We have genAI generating videos and the quality sucks compared to human-produced and human-filmed content. People call it out, and nobody is going to watch a genAI movie at the theater or binge a genAI TV show. Merit-based filtering.
GenAI for music is not as good as human-generated music either. Not a single AI song from Suno or Udio has reached the top 40. Not even one. 100% of the songs are human because they are evaluated on merit.
We have SWE and agentic benchmarks to evaluate coding LLMs on merit.
Disclaimer: I am a new account.
Welcome. Illegitimi non carborundum.
The HN user base is not perfect at detecting LLM content but a lot of it does get flagged and downvoted eventually. About once a day I’ll click on a link, realize it’s AI slop, and go back to HN to flag it but discover that it’s already flagged.
If you turn on showdead you can see all of the comments from LLM bots that have been discovered and shadowbanned.
The fallacy in the comment above is simple: It’s taking the current situation and extrapolating to an extreme future, then applying the extrapolated future prediction on to the current situation. The current situation does not represent the extreme future predicted. A lot of the LLM content is easily spotted and a lot of it is a waste of time to read, therefore it’s right to police and ban it. Even if imperfect.
I'm not sure we can. Imagine an AI that 1) creates multiple accounts, 2) spews huge numbers of comments, 3) has accounts cross-upvote, and then 4) gets enough karma on multiple accounts to get downvote privileges. That AI now controls the conversation. Anything it doesn't like, it can downvote to death.
I mean, I'm sure that HN has a "voting ring" detector, but an AI could do this on a sufficient scale to be too large to register as one cohesive group. And I think HN has a "downvote brigading" detector, but if the AI had enough different accounts, I'm not sure that would trigger, either.
The best chance to detect it is just on volume (or perhaps on too many accounts coming from the same IP address or block). But if the AI was patient, I'm not sure even that would work.
That's depressing. I don't want HN to become a bot playground, with humans crowded out. But I'm not sure we can stop it, if it was done on a large enough scale.
The OP is talking about posts, not comments. The simplest solution might be to prevent someone from posting a "Show HN" until they’ve earned twenty-five or fifty karma, to demonstrate that they’ve been actively participating on Hacker News rather than using it solely to promote themselves.
It’s a speed bump at best.
Actively encouraging this will only make things worse.
I'd rather see you gone than the people you complain about.
I didn't actually create my account until 2021? 2022? I can't remember. And I didn't make my first post or even comment until just last week.
While I think a minimum post count or reputation metric could perhaps reduce the AI generated posts, introducing friction also makes it harder for real people to contribute anything meaningful.
Furthermore, what does it matter if it's "AI generated"? Is some AI content ok? What's the pass/fail threshold on human vs AI generated text?
I made a Show post last week where I heavily relied on AI. I'm sure there are some "tells." But even so, I spent more than three hours working on the content of my post and my first response. Would my post have been acceptable to you?
If a human put his effort into it, is proud of it and wants to show it to the world, i'm happy to invest some time to have a look at it and maybe provide some helpful feedback.
I'm not willing to invest my time into evaluating the more or less correct sounding ideas of a ML model.
i.e. only surface stories posted by or upvoted by those you trust, and the inverse with those you distrust.
Then exponentially drop off trust as it propagates transitively, and it could be almost workable.
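The transitive-trust idea with exponential drop-off can be sketched in a few lines. This is a sketch under invented assumptions: a hypothetical "vouches for" graph and an arbitrary decay factor of 0.5 per hop:

```python
from collections import deque

def trust_scores(seed: str, endorses: dict, decay: float = 0.5) -> dict:
    """Breadth-first trust propagation: each hop away from `seed`
    multiplies trust by `decay`; keeps the best score per account."""
    scores = {seed: 1.0}
    queue = deque([seed])
    while queue:
        user = queue.popleft()
        for other in endorses.get(user, []):
            candidate = scores[user] * decay
            if candidate > scores.get(other, 0.0):
                scores[other] = candidate
                queue.append(other)  # re-visit only on improvement
    return scores

# Hypothetical "vouches for" graph.
endorses = {
    "me": ["alice", "bob"],
    "alice": ["carol"],
    "carol": ["dave"],
}
scores = trust_scores("me", endorses)
# Surface a story only if a tolerably trusted account touched it.
visible = {u for u, s in scores.items() if s >= 0.25}
```

With decay 0.5, direct endorsements score 0.5, two hops 0.25, three hops 0.125, so a threshold of 0.25 admits up to two degrees of separation. Distrust could propagate the same way with negative weights, per the previous comment's inverse case.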
So maybe we should just be honest about this: our standards have risen. We want to see Show HN posts that require effort and dedication, that require more than a few hours of prompt flogging.
One example: https://news.ycombinator.com/item?id=46884481
That being said, there is a sub-trend of above-average numbers of low-quality submissions that are obviously trying to plant a money tree. This is largely driven by the "look ma, no hands" AI tools like OpenClaw, overlapping (Venn-style) with the crypto crowd looking to make easy money with near-zero effort.
With that being said, I have definitely seen some real bangers with large AI contributions. So I am generally in favor of minimally changing how HN works today. One small change would be adding to the Guidelines and FAQ, giving the agents something to read before posting (so that they know automated submissions are not allowed[1]).
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I think that's great in moderation as it stimulates ideas and discussions, shows us what folks are working on, etc... but this can't become Product Hunt. The reasons for posting here should be vastly different than posting on Product Hunt.
Also the purpose of Show HN along with HN in general is to spark intellectual curiosity and create interesting conversation, and nothing about LLM generated code does that, because the person who prompted the AI to make it doesn't understand it and can't discuss it in any depth.
This also appears to cause a serious shift in the kind of projects that are submitted (i.e.: towards things that are much more accelerated by AI assistance).
They'd assume this, even if they hadn't used AI, and even if AI didn't have the ability to pull it off.
Other subs are slowly being inundated with hidden history spammers …
Bad times.
Your post advocates a
( ) technical ( ) legislative ( ) market-based ( ) vigilante
approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
( ) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
( ) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
( ) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
( ) Huge existing software investment in SMTP
( ) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
( ) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
( ) Bandwidth costs that are unaffected by client filtering
( ) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
( ) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
( ) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!

Such a sad development.
There are still quality submissions by new accounts and HN is good at pulling those needles from the haystack.
Today, for the first time, I actually have something I'd like to contribute: a personal open-source project I built with the community in mind and wanted to share.
But if I create a new account, I get redirected to https://news.ycombinator.com/showlim. I also have an older account from 2021 (the one I'm using to comment right now) that I barely used, and that one doesn't let me post to Show HN either. It just says: "Sorry, your account isn't able to submit this site".
I understand the need to limit spam. But shouldn't mechanisms like upvotes, shownew, and showdead already help with that? A full block like this seems to hit not just spammers, but also lurkers and new users who are trying to contribute for the first time.
To me, that risks making the community feel more closed than it should.
345 comments | 64 hidden | 50 blocked | 15 green
So I don't see people who annoyed me for one reason or another in the past, I auto-hide the top 1000 accounts by word count, and I hide all green users. This was trivial to write for myself, and I think more people should work on something like this for themselves.

The problem is that once this is found out, the circumvention is easy enough to program into bots/LLMs.
are we going to reinvent the Voight-Kampff test from Blade Runner?!?
randusername_2022
I'm right on the boundary of the slopocene, not sure if in or out.
Am I too late to get ahead of the curve and stockpile some, while they're still relatively cheap?
Losing that seems too high of a price to pay. Yes there are AI generated comments, in the past there has been script generated comments. You can report, downvote, or just ignore and move on. I am aware of posts like this existing, but I feel they are being effectively managed.
Try not to be too offended by the notion of these posts existing. Many of them are not malicious, they're just caused by users stepping outside what is considered appropriate, but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
Yes, and sometimes some of the HN automatic filters kill the comments. Remember to "vouch" for the comments if they are interesting/relevant; a few "vouches" unkill the comments. And in extreme cases, send an email to hn@ycombinator.com so dang/tomhow can take a look and use some magic to fix the problem.
Assuming the mods just auto-ban new accounts and require them to be vouched and to earn minimum karma before being visible, those comments can be vouched up or approved by the moderators. The poster won't know that they've been banned, of course, because that's how shadowbanning works, so the approval process should be seamless for them.
But how often does that happen versus the AI comments and alt account trolling?
>but in a landscape where the footing is quite dynamic, everyone is making their own judgement calls in a field where the consensus is not clear, guidance seems more appropriate than punishment here.
The consensus is and has always been clear. Generated comments of any kind have never been allowed. People just don't care, and that's a problem.
And those comments are malicious in effect if not intent. We're here to have conversations with human beings, the intellectual and emotional connection is important. What is the point of having conversations with a machine, much less not knowing one is having a conversation with a machine? If nothing else, it's dehumanizing and a waste of time.
My initial thought is to set up a devoted account like "sock_puppet_detector", use the infrastructure from https://hackersmacker.org/, and add any likely sock-puppets as 'foes'. Then anyone can install hackersmacker, and add "sock_puppet_detector" as a friend to see sock-puppets highlighted. Likewise for rules violators.
This was the first time I encountered a fairly stringent "prove you're human" process. It goes something like this:
1. Send email from a valid e-mail account.
2. Include your name, occupation, the briefest of explanations of why you want to join this Slack, how you heard about us, and that you have read and accept our Code of Conduct.
3. Include links to your BlueSky, LinkedIn, Github, X, or Mastodon — any site demonstrating you’re a human.
Now, before anyone says these can all be automated by an agent: yes, you are correct. But I also had an e-mail conversation with Rand himself before joining his Slack. I know you can't do this at internet scale, but perhaps that era is over.
Moderators don't have the capacity (and, in fairness, it is impossible) to check whether they are bots or humans.
There are no good solutions. There are hundreds of thousands of intelligences out there, trained for millions of hours on how to scam humans, capable of spitting out text tirelessly and shamelessly, and there will only be more of them: tens, hundreds, thousands of times more.
But then again, some of the most prolific, most upvoted accounts on this site constantly flood the site with political content, nothing is ever done about it, and they get rewarded for it... so yeah. I gave up hope a long time ago.
If you look at the leader board (https://news.ycombinator.com/leaders), you'll find a few old accounts that pretty much do nothing but farm links, posting sometimes dozens of times a day, with a very low percentage of comments. Their high "score" isn't an indicator of quality; they just spam enough that a few get some good upvotes, but most of their submissions are low quality.
Each one gets 4-5 karma, a few crack double digits. Post 10 or 20 a day over a year or two and they're five figures. Pure farming.
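As a rough sanity check of that arithmetic (the per-post karma and posting rates are taken from the figures above; everything else is a back-of-envelope assumption):

```python
# Back-of-envelope check of the karma-farming math: 4-5 karma per
# submission, 10-20 submissions per day, sustained for a year.

def yearly_karma(posts_per_day: float, karma_per_post: float, days: int = 365) -> int:
    """Estimate total karma accumulated by steady link farming."""
    return round(posts_per_day * karma_per_post * days)

low = yearly_karma(10, 4)    # conservative end of the quoted range
high = yearly_karma(20, 5)   # aggressive end

print(low, high)  # 14600 36500 -- comfortably five figures in one year
```

So even the low end of the quoted range clears five figures within a single year of farming, no quality required.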
Bots are recognizable and can be selectively ignored. But the echo chamber that would result from measures like this cannot be, because you cannot see the potential comments and posts that were snuffed out because someone didn't bother.
If you want HN to be a place where you feel comfortable and your worldview goes unchallenged, sure, go ahead. But then we already have reddit.
BTW, I am curious: how do you figure out that it's AI generated, other than the green username?
Additionally, dang replied to it: https://news.ycombinator.com/item?id=47050421
1. Exist for some time.
2. Vote on stuff that humans would vote for.
3. Avoid voting on traps.
4. Comment occasionally and productively.
5. Post to a limited existing audience, and receive upvotes.
6. Post limitedly to a general audience.
7. Post generally.
It’s basic earn a reputation behavior.
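The ladder above could be sketched as a simple staged gate. To be clear, the thresholds, field names, and levels here are entirely hypothetical illustrations, not anything HN actually implements:

```python
# Hypothetical staged-trust ladder: an account unlocks broader rights
# only after clearing each earlier rung. All thresholds are made up.
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int = 0
    good_votes: int = 0        # votes that matched eventual community consensus
    trap_votes: int = 0        # votes cast on known bait ("traps")
    upvoted_comments: int = 0  # productive comments that earned upvotes

def trust_level(a: Account) -> int:
    """Return the highest rung (0-4 shown here) the account has earned."""
    level = 0
    if a.age_days >= 7:                         # 1. exist for some time
        level = 1
    if level == 1 and a.good_votes >= 20:       # 2. vote like a human would
        level = 2
    if level == 2 and a.trap_votes == 0:        # 3. avoid the traps
        level = 3
    if level == 3 and a.upvoted_comments >= 5:  # 4. comment productively
        level = 4
    # rungs 5-7 (limited audience -> general posting) would extend this chain
    return level

print(trust_level(Account(age_days=30, good_votes=50, upvoted_comments=8)))  # 4
```

The key property is that each rung is gated on the previous one, so a bot can't skip straight to general posting by optimizing a single metric.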
It almost feels like new accounts should be treated like new posts -- it is sort of a service that a select few are willing to undertake to upvote interesting stories early on.
I wish even more I could block specific users (there are some highly prolific, high karma users here who are extremely irritating), but that's harder and is probably best handled client side.
Think back to prohibition. Just because we want less public drunkenness doesn't mean it is wisest to ban alcohol. One has to ask: what is the chance the ban is successful? What happens when it cuts the wrong way?
To what degree do we care about (1) "human" versus "AI"; (2) comment quality; (3) sensible methods for revealing social preferences? I care a lot more about the latter two than the first. It doesn't have to be a zero sum tradeoff, but I think it is a good starting question.
Let's have that discussion and not try to solve the human-vs-AI classification problem.
Same reason that burglars don't typically target security camera stores and robbers don't typically target police departments - it's basically a fast-track to early detection, which disrupts the main objective of the adversary.
In addition, I’ve been on HN since the late 2000s. Look, it’s a new profile. Also, sometimes I use AI to help craft better responses. Do with that what you will.
There are barely any bots on Twitter. There were thousands upon thousands of bots before 2023, because the API was free. These days, running a bot on Twitter is expensive.
Fun fact: a company I worked for in the past had access to an undocumented partners-only API that allowed us to register unlimited number of accounts. I was personally tasked to handle the integration.
I believe it's a policy or moderation enforcement issue. Such as banning incomprehensible / low value posts whether generated by AI or not.
I think a simple solution (and one that eventually every content platform will have to adopt) is to allow users to tag AI-generated spam. I think that a few years from now this feature should be the norm, like existing basic features on forums such as upvote, downvote, favorites, hide, etc. I know this will require much more development effort than simply blocking new accounts from posting at all. But on the other hand, you can’t block new accounts forever.
From the perspective of usually just swinging into a post from the front page, when I do see green, it's usually overtly political trolling, and dead from the start. So I had assumed new account = everyone sees your post in gray, at least for a week or two.
I don't envy the "Show HN:" case. It can be intractable, story time:
Last week, there was a "Show HN:" post for a GitHub link, made it all the way to #2. It was a Flutter app, written up as if it did all the stuff you'd want from an open source LLM client. I said to myself "geez, I knew I took too long to deliver the thing I've been working on for 2 years. the MVP version is insanely popular."
-- only after digging into the repo for 10 minutes, with domain expertise, did I realize it was a complete Potemkin village, built by Claude. And even then, I was afraid to post something pointing this out because it required domain expertise, and it could have read as negative rather than principled.
All that to say, some subsets of The AI Poster Problem now require having intimate domain expertise and 10 minutes to evaluate it. :/
Additionally, the Claude 4.6s and GPT-5.4s are better than me at posting on HN now. :/ And I've been here 16 years. The past couple of days, any comment I write involving some sort of judgement or argument is drafted by Opus 4.6 or GPT-5.4, via: 1) dump the HN post into the prompt, 2) say "I feel $X about this, write me an HN post that communicates this but not negatively".
I'm a little ashamed to admit that if you look through my post history, you'll definitely see a repeated pattern over 16 years of someone who is very negative and has a hard time communicating it constructively. They're smart enough now to extrapolate observations the way I want to, while avoiding my own tarpits.
And beware of what's already in context. Sometimes ideas that seem obvious given antecedents are not so obvious when taken in isolation.
I'm not saying your idea is bad necessarily but giving another perspective.
Suggestion 2: Temporarily block AI themes from "Show HN". Missing AI hype is better than drowning everything else in it, IMHO.
Unfortunately, I was not able to "reorganize" comments/posts in a manner that I felt was particularly "better", and didn't keep the plugin, for whatever that's worth.
I think it would be more prudent to overlay a web-of-trust, where accounts whose submitted links/comments you upvoted are then given significantly higher priority in other threads/feeds (unfortunately, downvotes are not made apparent on HN, but factoring them in would also help). Exposing your web-of-trust may also assist others in determining trusted content.
Perhaps this web-of-trust approach is dystopian on the order of MeowMeowBeenz, but I have not heard any other practical solutions to the disintegration of trust which is upon us.
Edit: Elsewhere in this thread HackerSmacker was mentioned, which is what I'm describing. That's exciting, I'll be trying it out later.
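The upvote-weighting overlay could look roughly like the sketch below. The class, the 0.2 boost factor, and the ranking formula are all hypothetical choices for illustration, not how HackerSmacker or HN actually works:

```python
# Hypothetical web-of-trust overlay: boost items from authors whose past
# contributions you upvoted, penalize those you downvoted.
from collections import defaultdict

class TrustOverlay:
    def __init__(self) -> None:
        self.score: dict[str, int] = defaultdict(int)  # author -> your net votes

    def record_vote(self, author: str, delta: int) -> None:
        """delta = +1 for an upvote, -1 for a downvote."""
        self.score[author] += delta

    def rank_key(self, author: str, base_points: int) -> float:
        # Trusted authors get a multiplicative boost; 0.2 is an arbitrary weight.
        return base_points * (1.0 + 0.2 * self.score[author])

overlay = TrustOverlay()
overlay.record_vote("alice", +1)
overlay.record_vote("alice", +1)
overlay.record_vote("bob", -1)

print(overlay.rank_key("alice", 10))  # 14.0 -> alice's items float up for you
print(overlay.rank_key("bob", 10))    # 8.0  -> bob's items sink
```

Sharing the `score` map publicly is the "exposing your web-of-trust" part: others could merge trusted users' maps into their own ranking.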
I find it's worse here now than X. Literally every discussion turns into meta-discussion and gets severely politicized. On certain topics you get flagged out by a mob for stating facts.
At least on X reply bots are not allowed anymore. Blue checks are useless tho.
I disagree, but in any case the easy solution in that case is to use X instead of HN.
> At least on X reply bots are not allowed anymore.
In theory, maybe.
Moderation is already taxing as it is.
I’ve read all of the source and I drove the architecture but it would be a stretch to say I didn’t ask for assistance on things that felt fuzzy or foreign to me. I also have generally stopped typing code. I still don’t think the LLM made the project though, it feels like my decision making.
If the bar for Show HN becomes no AI whatsoever then you’re just going to see a bunch of people covering their AI tracks. I’m reluctant to post it because I’m afraid of getting blasted by the community for using AI. At the same time, it is work that I’ve poured hundreds of hours into, that I’m proud of and that I think would be of interest to HN.
I read the Obliteratus post that made it to the front page the other day and I agree that is pure slop. While it’s frustrating that it took up front page space, it’s evident that the whole community caught on to the sloppiness of it all immediately and called it out. I just don’t think HN wants to set the precedent that no AI code should be shared.
I also saw a week or two ago that someone open sourced a project of theirs that wasn’t open source in the first place. The reason they stated was that they had vibe coded and were embarrassed to be discovered. If you want to get a concept out quickly with AI, you’re now hesitant to open source because of the precedent set by the community. I think that’s a scary thought to me. I would rather know the tools I’m using are AI generated/assisted and make the value judgement on if I trust the code and project owners.
(edit: And thus such bots can't easily discover that they shouldn't post, afaict)
Genuine innovation is what we most want to encourage. That's what Show HN has always been about.
The problem now is that coding assistants have dramatically lowered the bar for getting a product or tool working, without the need for much innovation. We need new ways of identifying projects that are genuinely innovative so that their creators can be fairly rewarded, rather than being drowned out.