We are likely going to get better at judging this new kind of communication and media. But we need much more experience with it before we can do that properly.
It will be annoying for quite a while, as it was with social media, until we find the places that are still worth our time and attention. But I am hopeful that we will be able to do that.
Until then I am going to work on my AI side project every evening until I deem it ready and bug-free. It already works well enough for my own purposes (which I made it for), and my requirements were heavily influenced by my work process. I would never have been able to finish such a project without AI, even working on it full time for a year.
Six months ago no-one would post a "Show HN" for a static personal website they had built for themselves - it wouldn't be novel or interesting enough to earn any attention.
That bar just went through the ROOF. I expect it will take a few more months for the new norms to settle in, at which point hopefully we'll start seeing absurdly cool side projects that would not have been feasible beforehand.
I suspect a month of AI accelerated work is still enough to make the front page. I don’t see the competition as steeper. I bet it’s about the same per unit time.
What I mean by that is: after reading through a brief description of a project, or a conceptual overview, they are no better than noise at predicting whether it will be worthwhile to try out, or rewarding to learn about, or have a discussion about, or start using day-to-day.
Things on the front page used to be high-quality software, research papers, etc. Now it is entirely driven by marketing operations and social circles. There is no differential amplification of quality.
I don't know what the solution is, but I imagine it involves some kind of weighted voting. That would be a step towards a complicated engagement algorithm, instead of the elegant and predictable exponential decay that HN is known for.
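For context, the "elegant and predictable" decay mentioned above is usually described by a widely quoted approximation of HN's ranking formula (the real formula is not public, so the exponent and offsets below are the commonly cited guesses, not confirmed values):

```python
def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
    """Time-decayed story score: newer and higher-voted items rank higher.

    This is the commonly cited approximation of HN's front-page ranking;
    `gravity` controls how quickly a story falls off the front page.
    The exact production formula is not public.
    """
    return (points - 1) / (age_hours + 2) ** gravity

# The same story loses rank predictably as it ages:
fresh = rank_score(points=100, age_hours=1)   # high score shortly after posting
stale = rank_score(points=100, age_hours=24)  # much lower a day later
```

Weighted voting would replace the single `points` input with a per-voter weight, which is exactly where the predictability starts to erode.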
You could criticize a Michelin inspector the same way. The poor bastards have to actually taste the dish and can't decide merit based on menu descriptions alone.
I have zero interest in seeing something that Claude emitted that the author could never in a million years have written themselves.
It's baffling to me that these people think anyone cares about what their Claude prompts output.
I apply the same logic to software.
I love computing, and programming. If anything I'm better able to appreciate that now that I no longer care if my work has any impact.
Also, write another JavaScript framework if it seems easier to create one than to take the time to learn one.
I think we're going to implement this unless we hear strong reasons not to. The idea has already come up a lot, so there's demand for it, and it seems clear why.
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
https://news.ycombinator.com/item?id=47077840
https://news.ycombinator.com/item?id=47050590
https://news.ycombinator.com/item?id=47077555
https://news.ycombinator.com/item?id=47061571
https://news.ycombinator.com/item?id=47058187
https://news.ycombinator.com/item?id=47052452
--- original comment ---
Recent and related:
Is Show HN dead? No, but it's drowning - https://news.ycombinator.com/item?id=47045804 - Feb 2026 (423 comments; subthread https://news.ycombinator.com/item?id=47050421 is about what to do about it)
AI makes you boring - https://news.ycombinator.com/item?id=47076966 - Feb 2026 (367 comments)
archive:
https://github.com/ludos1978/ludos-vscode-markdown-kanban/bl...
in progress or undone:
https://github.com/ludos1978/ludos-vscode-markdown-kanban/bl...
And there is much room for improvement, because I often watch videos alongside vibe coding the project, doing it in the minutes I have spare, so the history is a huge mess. But I started 6 months ago, with most of the work done in the last 4 months. I spent about $1000 of AI budget on it.
When I use Codex to do vibe coding stuff, I don't usually have one big prompt, I usually have it do small things piecemeal and I iterate with it later. Maybe I'm using it wrong but it tends to be more "conversational" and I think that would be harder to share, especially considering I'll do things over dozens of sessions.
I suppose I could keep an archive of every session I've ever opened with Codex and share that, but thus far I haven't really done that.
Granted, I don't really share my stuff with "Show HN".
The limitations you're describing seem mostly to have to do with tooling, rather than objections in principle.
LLMs are non-deterministic…
Technically I suppose one should say that the input is the tuple of (prompt, model, X) where X is some bundle of factors that nobody understands, but that formulation doesn't help me think about how we should adapt to this moment.
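One concrete piece of that X is the sampling configuration: even with (prompt, model) held fixed, the output depends on parameters like temperature and seed. A toy sketch (the token weights below are made up for illustration, not from any real model):

```python
import random

def sample_token(weights, temperature, seed):
    """Temperature sampling over a toy next-token distribution.

    The same (weights, temperature, seed) always yields the same token;
    change the seed and the choice can differ. Low temperature
    concentrates probability mass on the highest-weight token.
    """
    rng = random.Random(seed)
    # Raising weights to 1/temperature sharpens (T < 1) or flattens (T > 1)
    # the distribution before normalizing.
    scaled = [w ** (1.0 / temperature) for w in weights]
    total = sum(scaled)
    r = rng.random()
    cum = 0.0
    for i, s in enumerate(scaled):
        cum += s / total
        if r < cum:
            return i
    return len(scaled) - 1

# Toy distribution over three candidate tokens.
weights = [0.1, 0.7, 0.2]
```

Pinning the seed makes this toy sampler reproducible, but for hosted LLMs the rest of X (batching, hardware, model updates) is out of the caller's hands, which is the practical source of the non-determinism.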
How would that be feasible for a project of any complexity whatsoever?
> Authors should be asked to indicate categories of AI use (e.g., literature discovery, data analysis, code generation, language editing), not narrate workflows or share prompts. This standardization reduces ambiguity, minimizes burden, and creates consistent signals for editors without inviting overinterpretation. Crucially, such declarations should be routine and neutral, not framed as exceptional or suspicious.
I think that sharing at least some of the prompts is a reasonable thing to do or require. I log every prompt I make to an LLM. Still, I think this is a discussion worth having.
[1] https://scholarlykitchen.sspnet.org/2026/02/03/why-authors-a...
Btw I realize "prompt" is no longer the right word for this but I don't know what a better term would be. "Conversation with LLM" is too clumsy.
But other distribution strategies exist. You just have to be smarter about finding and getting in front of your core audience.
I'm not sure this article deserves all that much attention if the standard is a subjective interpretation of what is truly special: human-made, human-directed, or not.
Is there some sort of spectrum of not special, kind of special, pretty special, and truly special?
Does it have to be special for everyone or just some people?
Is it trying to say that people by default build and share things for external validation?
The argument about how people are using AI to solve a problem is akin to how people might feel about someone using a spreadsheet to solve a problem.
Sometimes projects are for learning. Sometimes projects are for solving a problem that's small to others, but okay to you to solve.
Insecurity about other people learning to build things for the first time and then continue to learn to build them better might be what this is about, period.
There's always been a great number of problems that could never quite get the attention of software development.
I've genuinely met non-software folks who are interested in first solving a problem and then solving it better and better. And I think that type of self-directed learning in software is invaluable.
AI makes slop, but humans sure seem to like creating the same frameworks over and over in every language and thinking it's progress in some way. But every so often, you get a positive shift forward, maybe a Ruby on Rails or something else.
Chances are they were never worth talking about, but that doesn't mean the journey wasn't worth it.
Solid solutions are being overshadowed by AI-slop alternatives assembled in a few months with no long-term vision; the results look great superficially, but under the bonnet they're inefficient, expensive, closed, and inflexible, and the experience degrades over time. All the essential stuff that people don't think about initially is what's missing.
It feels like the logical conclusion of peak attention economy; the media fully saturated with junk where junk beats quality due to the volume advantage.
What, were they vibe coded in COBOL?
That is: I don't understand why the use of Claude Code itself renders them unworthy of discussion.
Productizing anything is hard and writing code with AI is basically impossible to do reliably, securely, and at scale unless you're already an expert in what you're trying to do. For example, working on a project now, and it's kind of endearing watching my AI buddy run into every single pothole I ran into when I first started working with Tauri or Rust.
Unless you know what you're doing (and why you're doing it), AI suggestions are in the best case benign, and in the worst case architectural disasters. Very rarely, I'm like "hm that might be a good idea."
I think AI-aided development will raise the bar for products and make expert engineers like 10x more valuable. Personally, I'm elated that I don't have to write my 4000th React boilerplate or Rust logging helper anymore.
And the real, actual hard work (as in: coming up with new algorithms for novel problems, breaking problems down so others can understand them, splitting up code/modules in a way that makes sense for another person, etc.) will likely never be doable by AI.
Come on. This site keeps promoting negative content.
It wasn't like I couldn't build before, it just makes it easier and a hell of a lot more fun now. I just did an AI side project and it was a blast. https://oj-hn.com
AI isn't going to take your job. People who know AI are.
If your spaces that were previously full of interesting things suddenly become deluged with uninteresting things then that is something to complain about.
Interviews by celebrities predicting AI will revolutionize the economy: 2837191001747
Software and online things I've used that seem to be better than they were before ChatGPT was introduced: 0
- searching: better
- photo edit/enhance/filter: easier and accessible
- text summarization: better
- quick scripts/tools: faster
- brainstorming/iterating ideas: faster
- generating list of names: faster
- rephrasing text: better
- researching topics: faster
- stackoverflow: I'm finally free, won't be missed by me
- coding: debatable but for me LLMs made possible projects that weren't before due to scope or lack of expertise
And I think the same is true for quite a lot of people. It’s a thing I said a few months ago while discussing with someone about AI. “If AI was to go away, would you care? Compare with if smartphones were to go away.”
Most people would not blink twice if AI were removed. Try to remove their phones; it would not be the same!
The AI mode does at least attempt to list its sources, but it's extra hoops to jump through.
Google Lens image search used to be amazing. I repeated a search I'd done before for a piece of art; it showed the same piece, but confidently listed the wrong artist and a year off by about 300 years.
I’ve had relatives do “research” about things I mentioned I needed to do, and they’ve just sent screenshots of the incorrect AI answer.
It’s made Google almost entirely useless. There is zero incentive for them to try to make search better (vs incentive to make it worse), and even if they did want to make it better, the sheer volumes of slop have made that even harder.
We’ve completely sabotaged our ability to collate information at scale as a civilization, for the benefit of a few companies that were already the largest in the world to begin with. And it turns out, very few people notice or even care about this.
I would not know if they have gotten better or worse cause I don’t use them anymore.
I don't think you can really get any sort of a signal on this?
Nobody is all that sensitive to the amount of features that get shipped in any project, and nobody really perceives how many people or how much time was needed to ship anything. As a user, unless that means a 5x difference in price of some service, you don't really see or care about any of that - and even if there were savings on the part of any developer/company, they'd probably just pocket the difference. Similarly, if there's a product or service that exists thanks to vibe coding and wouldn't have existed otherwise, you probably don't know that particular detail.
Even when fuckups and bugs do happen, there's also no signal whether it's explicitly due to AI (or whether people are scapegoating it), or just management pushing features nobody wants and enshittifying products and entire industries for their own gain.
Well, maybe StackOverflow is a bit easier to host now: https://blog.pragmaticengineer.com/stack-overflow-is-almost-...
A bunch of things became obsolete with the emergence of good LLMs, especially in research and text-related tasks. See the usage graphs of Stack Overflow, Grammarly and others.
LLMs put the information from StackOverflow into an arguably more helpful format, but they're still heavily dependent on human input. The LLM must periodically harvest info from human communities to stay up-to-date with technological progress. If those communities die, the LLM will not compensate, and other communities will arise to replace them. Those communities may have different attitudes towards the LLMs that killed their ancestors.
People may lack ideas for interesting projects to work on in the first place, so we need to think about how to help people come up with useful and interesting projects.
Related to that idea, people may need to develop skills for building more "complex" ideas, and we may need to think about how to make that possible in the era of AI usage. Even if an AI agent can take care of the technical side of things, it still takes a kind of "complexity of thought" to dream up more complicated, useful, or interesting projects. I get the impression there may be a need for some training of the mind to ask for an "automobile" rather than a "faster horse", by analogy - and that conception was often found through manually tinkering with intermediate devices like the bicycle. An AI could "one shot" something interesting, but what that thing is is limited by the imagination of the user, which may in turn be limited by technical inability. In other words, the user may need to develop more technical ability in order to dream up more interesting things to create, even if that "technical ability" is just a more skilled usage of AI tools.
There needs to be some way to filter through "noise". That's not a new issue and... a lot of these questions or complaints often feel very "meta", as in - you could just ask AI how to make side projects more interesting or useful, or how to create good filters in the age of AI. In sports, there are hierarchical levels of competition - likewise here you might have forums that are more closed off to newcomers if you want to "gatekeep" things, and they have to compete in "local pools" of attention and if they "win" there, then their "qualified authority / leader" submits the project to a higher level, and so on. AI suggests using qualified curators to create newsfeeds or to act as filters of the "slop".
It’s why I only focus on actual hardware “hacking” projects: more fun to read and follow, and I know they weren’t vibe coded either.
Just because you can't separate the signal from the noise with that easy check doesn't mean these people can't get the joy of side projects. It's especially lazy when that project is open source and you can literally ask CC: hey, dig into this code, did they build anything interesting or different? Peter's side projects like Vibetunnel and Openclaw have so many hidden gems in the code (a REST API for controlling local terminals and Heartbeat, respectively) that you can integrate into your own project. Dismissing these projects as "AI slop" stops you from learning what amazing things people are building, especially when those people have different expertise. Lest we forget, the transformer model came out of machine translation and ended up powering AlphaFold; sometimes the best discoveries come from completely unrelated fields.
I would love to meet these people that are getting joy out of seeing other peoples random fucking vibe coded apps that have zero rigor or skill applied to them lol.
Writing such a quick note about AI slop a few days after they had a Show HN with something that looks a bit like AI slop amused me.