If you have engineers do the work of product people, you end up with the typical "engineered mess": very fast, full of complicated abstractions so that 80% of the codebase can be reused, but impossible for any user to make sense of.
Add in LLMs, which tend to never push back on adding or changing things, and you end up with deep technical debt really quickly.
Edit: Ugh, apparently you wrote your comment just to push your platform (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...) which is so trite, apparently HN is for people to push ads about their projects now...
One problem I personally have here is that I write code as a way to reason through and think about a problem. It's hard for me to grasp the best solution in a space until I try some things out first.
Does that mean you need AI subscriptions just to run your backend? That explodes costs even more than opaque cloud pricing. Sweet!
Teams where I work can use Claude Code, Codex, Cursor, and Copilot CLI. Internally, Claude Code and Codex seem to be the most popular tools across software teams.
If you're new to these tools, I highly recommend trying to build something with them in your free time. This space has evolved rapidly over the past few months. Anthropic is offering a spring break promotion that doubles the limits on weeknights and weekends for any of its subscription plans until the end of March.
This doesn't make a lot of sense to me even as someone who uses agentic programming.
I would understand not hiring people who are against the idea of agentic programming, but I'd take a skilled programmer (especially one who is good at code review and debugging) who never touched agentic/LLM programming (but knows they will be expected to use it) over someone with less overall programming experience (but some agentic programming experience) every single time.
I think people vastly oversell using agents as some sort of skill in its own right when the reality is that a skilled developer can pick up how to use the flows and tools on the timescale of hours/days.
You prompt it. That's it. Yes, there are better and worse ways of prompting; yes, there are techniques and SKILLs and MCP servers for maximizing usability, and yes, there are right ways to vibe code and wrong ways to vibe code. But it's not hard. At all.
And the last person I want to work with is the expert vibe coder who doesn't know the fundamentals well enough to have coded the same thing by hand.
I’ve seen some folks who are quite productive with these tools, but there is a lot more slop too. On my team, on the same code base, you see two different team members producing vastly different results.
And, at the end of the day, a person who can program will be better at agentic coding after a couple days than someone who cannot program who has been agentic coding for a year.
Agentic coding is just not all that complicated. It's a deep rabbit hole, sure, but figuring out how to prompt an AI is not that complicated. The harness can be, the skills might be, the subagent architecture maybe. But your organization should be standardizing that stuff. I would hope to God. Catching someone up to speed is very quick.
But, if you hire good engineers, you will have a competently engineered product. That has always been the case and will continue to be the case. If you hire sales people and product managers, it will not. Again, that's always been the case.
Personally I still believe that despite AI being moderately useful and getting better over time, it's mostly only feasible for boilerplate work. I do wonder about these people claiming to produce millions of lines of code per day with AI, like what are you actually building? If it's the Nth CRUD app then yeah, I see why... Chances are that, in the grand scheme of things, we don't really need that company to exist.
In roles that require more technical/novel work, AI just doesn't make the cut in my experience. Either it totally falls over or produces such bad results that it'd be quicker for a skilled dev to make it manually from scratch. I'd hope these types of companies are not hiring based on AI usage.
I noticed that some of these roles come from businesses that recently had layoffs and are now asking their staff to "do more with less", so not exactly places people would be eager to work at, unless they have to.
I don't know if this is the new norm but this craziness is not helped by the increase in the number of "AI influencers" pushing the hype. Unfortunately, I've been seeing this on HN a lot recently.
The problem with vibe coding isn't the coding part — it's that people are trying to think through their product while they're building it. That's always been a bad idea. AI just makes the consequences arrive faster.
Good product thinking happens before you touch any tool. What problem are you actually solving? For whom? What does success look like? What are you not building? These questions don't get easier with LLMs — if anything, because you can generate plausible-looking output so quickly, it's easier than ever to convince yourself you've answered them when you haven't.
The vibe coding mess isn't an AI problem. It's a 'skipped the thinking' problem that AI has made cheap enough to do at scale.
So many companies and products were built on a pivot. Why try to imagine the exact problem, user, success, etc. when you can build in seconds (or more reasonably days) and then find the right users' problems to rebuild around from actual users?
It might feel like kicking the can down the road. But it's a much more informed decision once you see whether users churn or not. And you can reach that decision point just about as quickly as a long thinking/planning phase.
Thinking while building is bad when paying multiple high-dollar humans for months per rebuild. It's arguably better when a full redesign costs you $10.57.
If someone has been doing that for 10 years and learning nothing, that would be a huge red flag. One that will likely become more common as LLM usage increases.
A good company will not try to micromanage you as an engineer in that way.
E.g., nobody wants to continue working with someone who creates sound effects, a movie player, an operating system, etc.
What do you mean by this?
They can also waste a lot of time if you end up over-reliant on them, but that's where experience with LLMs comes in. The nature of them means it's not an exact science, but you get a feel for how to best apply them.
Don’t know/care about coding with AI? You’re unhireable now. Grim.
But man, I'm sure glad I left FAANG when I did. All this hysterical squawking over AI sounds utterly insufferable. If Claude was forced upon me at my job I would have likely crashed out at some point.
A decent company wouldn't necessarily look for someone who can type faster or commit 100x more code like the vibers do, but would look at how well you understand the code.
Vibe coding seems like a religion more than anything to me: engineers go and use techniques for prompting but rarely actually test those techniques relative to other ones. There is especially no evidence that vibe coding is a skill: the people most effective with these tools are people who would've been most effective without them (i.e. it relies on experience and domain knowledge).
If I were currently actively hiring and I wanted to capture skills that likely will translate well to an AI-augmented work strategy, I would focus on "code review" in the interview. I've only seen a handful of truly great, rigorous code reviewers in my career, but AI makes code review supremely important. Unfortunately, most of the real world "agentic coding" I've seen is light in review (lots of LGTMs!).
I think that firms are going to eventually collapse under their own weight unless models keep improving at the velocity at which slop is merged into main.
I will also note that I do use these tools, mainly as a search engine, and I do so in hiccups (I will use the tools for a month and then completely abstain for 1-2 months). I am worried about undermining my own cognitive fitness by overreliance on these tools.
That's not vibe coding. Imagine if you were hiring a chef and a candidate came in who'd never used a stove. Sure, technically there are other ways to heat food, but it would be a bit odd.
Everyone is talking about vibe coding all your dependencies, and the problem is that the people who are good with these tools and do get 50% or greater productivity benefits won't be able to empathize with the people who are bad with these tools and create all the slop.
I think AI encourages people to take side quests to solve easy problems and not focus on hard problems.
And that without domain expertise, problems will compound themselves. But I dunno, I agree that they’re here to stay.
It'll require stronger and more frequent push back to keep under control.
Just because you're using an LLM doesn't mean you're "vibe coding".
I regularly use LLMs at work, but I don't "vibe-code", which is where you just say garbage to the model and blindly click accept on whatever it spits out.
I design, think about architecture, write out all of my thoughts, expected example inputs, expected example outputs, etc. I write out pretty extensive prompts that capture all of that, and then ask the model for an improved prompt. I review that improved prompt to make sure it aligns with the requirements I've gathered.
I read the output like I'm doing a deep code review, and if I don't understand some code I make sure to figure it out before moving forward. I make sure that the change set is within the scope of the problem I'm trying to solve.
Excluding the pieces that augment the workflow, this is all the same stuff you would normally do. You're an engineer solving problems, and the domain you do it in happens to involve software and computers.
Writing out code has always been a means to an end. The productivity gains if you actually give LLMs a shot and learn to use the tools are real. So yes, pretty soon it's going to become expected from most places that you use the tools. The same way you've been expected to use a specific language, framework, or any other tool that greatly improves productivity.
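The spec-first workflow described above (design notes, example inputs/outputs, explicit scope, then a request for an improved prompt) can be sketched in a few lines. This is a minimal illustration, not any particular tool's API; the function names and section headings are made up for the example.

```python
# Sketch of a spec-first prompting workflow: gather requirements and
# examples up front, assemble them into one structured prompt, then
# wrap that draft in a request for an improved version to review.

def build_spec_prompt(goal, design_notes, examples, out_of_scope):
    """Assemble a structured prompt from an explicit, reviewed spec."""
    lines = [f"Goal: {goal}", "", "Design and architecture:"]
    lines += [f"- {note}" for note in design_notes]
    lines += ["", "Expected behaviour (input -> output):"]
    lines += [f"- {inp!r} -> {out!r}" for inp, out in examples]
    lines += ["", "Out of scope (do NOT touch):"]
    lines += [f"- {item}" for item in out_of_scope]
    return "\n".join(lines)

def improvement_request(draft):
    """Ask the model to improve the draft prompt; the improved prompt
    is then reviewed against the original requirements before use."""
    return ("Rewrite the following prompt so it is clearer and more "
            "complete, without changing the requirements:\n\n" + draft)

prompt = build_spec_prompt(
    goal="Normalize user-supplied phone numbers",
    design_notes=["single pure function", "no new dependencies"],
    examples=[("(555) 123-4567", "+15551234567")],
    out_of_scope=["the existing validation module"],
)
print(improvement_request(prompt).splitlines()[0])
```

The point is that all the engineering judgment lives in the spec you hand over; the model only ever sees requirements you have already thought through and can check its output against.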
Some ways an LLM can assist with coding: I recently needed to refactor a bunch of code. Claude was very helpful here; it completed in about 5 minutes what would have taken me a couple of hours by hand.
They are also very handy when using new frameworks and libraries. As we all know, documentation for open source projects is often lacking. Just yesterday I ran into this: I pointed Claude at the project's GitHub repo and had the answers to my questions in just a couple of minutes. Manually, I would have spent an hour or two reading the code to figure out what I needed.
They are very handy when debugging. Get a weird error that makes no sense? Instead of banging your head against the wall for a few hours, an LLM can help you find the problem much quicker.
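That debugging use amounts to packaging the failing code and its full traceback into one prompt, so the model sees exactly what you see. A minimal sketch, with the snippet and wording invented for illustration:

```python
import traceback

def debug_prompt(source, exc):
    """Bundle a failing snippet and its traceback into one debugging
    prompt. The prompt wording here is illustrative, not prescriptive."""
    tb = "".join(
        traceback.format_exception(type(exc), exc, exc.__traceback__)
    )
    return (
        "This code raises an error I can't make sense of.\n\n"
        "Code:\n" + source + "\n\n"
        "Traceback:\n" + tb +
        "\nExplain the root cause and suggest a minimal fix."
    )

# A deliberately broken snippet standing in for the "weird error".
snippet = "totals = {}\nprint(totals['missing'])"
try:
    exec(snippet)
except Exception as err:
    prompt = debug_prompt(snippet, err)
print(prompt.splitlines()[0])
```

Pasting only the last line of an error strips the context the model needs; including the source and the whole traceback is usually what makes the difference.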
We're not concerned about hiring for the 'skill' of using these things, but more as a culture check - we are a very AI-forward company, and we are looking for people who are excited to incorporate AI into their workflow. The best evidence for such excitement is when they have already adopted these tools.
Among the team, the expectation is that most code is being produced with AI, but there is no micromanager checking how much everyone is using the AI coding tools.
But then again, if you've never touched any form of agentic coding in 2026, that probably says something about your character.
My first experience with it was a year ago and the tests it produced were so horrendously hard to maintain that I kinda gave up, but I imagine that things have gotten a lot better in the last year.
The productivity gains are real, and in some cases they are enormous. It is actively, profoundly stupid to pass on them. You need to learn how to work with AI.