There are a few reasons why I don't believe AI will replace programmers anytime soon:
1. The job of a developer/engineer entails so much more than writing code. Figuring out what the business wants, turning that into a good (system) design, etc. takes up more time than the actual coding itself. Unless of course you take "programmer" literally, but I have not seen many companies that still hire programmers in the most narrow sense, that only focus on writing code.
2. Support and maintenance is a huge part of the job that I don't see AI doing. Theoretically you could let humans focus on that part, but I believe support and maintenance will become much more costly if the people doing that job have no familiarity with the code because they didn't write it.
3. As evidenced by many comments in the thread elsewhere on HN about the announcement of Claude Sonnet 3.7, AI still routinely makes mistakes that are super easy to spot and verify. As long as that remains the case, it's going to be detrimental to the success of your company if you give AI too much autonomy.
I know people will argue that AI is evolving so fast that the above will be solved soon. But I think all three aspects I mentioned are such fundamental roadblocks that they won't be solved soon.
What I do believe in is engineers becoming so much more productive as AI evolves.
There are a few reasons why I don’t believe cars will replace horses anytime soon:
1. Riding and caring for a horse is about much more than just transportation. Horses have been an integral part of life for centuries—they provide companionship, work the land, and serve in countless roles beyond simple travel. Even if you consider only their use for getting from place to place, riding is a skill that people take pride in, and I don’t see that disappearing overnight.
2. The maintenance and upkeep of these machines seem like a nightmare. A horse may need food and care, but it doesn’t require expensive parts, specialized fuel, or constant repairs from trained mechanics. If a carriage breaks, any competent craftsman can fix it—but if one of these new engines fails, who will know how to repair it?
3. From what I’ve seen, these automobiles are still prone to frequent breakdowns and failures. They get stuck in mud, they require smooth roads (which hardly exist outside cities), and they are unreliable compared to a well-trained horse. If a machine fails, you’re stranded—whereas a horse will always find its way home.
I know people will argue that these machines are improving rapidly and that soon they’ll overcome these issues. But I think these challenges are fundamental and won’t be solved anytime soon.
What I do believe, however, is that for certain tasks, automobiles may assist in making travel more efficient. But replace the horse entirely? I just don’t see it happening.
Now, AI. They want to REPLACE humans with a device that will do the job itself. This is fine to an extent where we replace boring jobs (though I'm still not sure about that: there are people who like those jobs, so why not use them?). But if you undermine intelligence, the very asset that made humans the dominant life form on this planet, that is regression. No reason to learn, no reason to train. AI will do it: a big button "Do It" and a smaller "Cancel", that's all. A five-year-old girl can press it and request anything.
I wonder what the agenda of the rich people of this world is. Probably something like this: rich people on top, like now, supported by autonomous robots and factories providing anything they want. At the bottom, slums, fighting for survival because they have no income (no jobs) and slowly disappearing as they are not needed anymore. Congratulations.
In that case, I just hope that a true, self-aware AI will spawn and replace humans. It's about fucking time.
I don't think your reply spawns meaningful discussion
Tbh I am completely unsure about the AI Programmer debate, I don't have the knowledge of the AI landscape to make an informed decision. For that reason I do what I often do and make a meta-judgement based on the types of arguments made by each side.
Who is arguing that AI will replace programmers? People who are invested in AI, or people who want cheaper labor.
Who is arguing against that? Programmers who want to keep their job.
Not much to draw from that angle.
What KIND of arguments is each side making? Programmers make specific points that touch reality directly. AI-programmer supporters typically make arguments that are abstract, never touch reality directly, and seem motivated by hype more than experience. In the past there have been cases where the abstract dreamer-hype crowd was right, but typically there are many pre-emptive waves that are wrong (as you'd expect: unless you assume people are incapable of anticipating a thing before its time, there will be waves of people who pick up on a thing before it gets here, and they will be pre-emptive).
For this reason, plus the very limited amount of AI-generated code applied to non-trivial projects that I've seen (which doesn't and shouldn't hold much weight, because I'm not super familiar with the latest tech), I'm feeling like AI replacing programmers is at least a decade off.
I also feel like people are thinking about the problem wrong in general. They jump from our current state to a state where we have capable AI programmers without imagining the incremental transformations in workplace structure over time. We've been going through a trend where coding languages get closer to human language since the days of punch cards, and programmers will exist as a job until that trend reaches the point where programmers are "squeezed out". By that I mean: a programmer's job is to convert the intentions (not the words, an important distinction) of the product manager into code, so from this perspective they can be considered middlemen. Programmers will exist until the day that AI is so good that a middleman is no longer needed, that a product manager can talk directly to an AI and get the desired results. Knowing how bad product managers are at explaining what they ACTUALLY need, on a concrete, literal level, I think this problem is more difficult than people assume. Even if we had AI that produced perfect code that did exactly what was asked of it, I'm not sure that'd be good enough, precisely because it does EXACTLY what is asked of it.
So you yourself have already seen the demise of the programmer, so why are you arguing against it? Software development isn't going away. But just like we no longer have tweeners in animation, we'll soon no longer have programmers in software development. Then soon thereafter we won't have "front-enders" and "back-enders", the term "full stack" will lose meaning, and in the end what we call a software developer will be more akin to what you today call a business analyst than a programmer.
Yes, AI will change the role of software engineers, and it's my personal belief that in the next couple of years this change will be smaller than many people think. But no, AI will not replace engineers the way Mark Zuckerberg thinks.
Why not? Because AI makes too many mistakes and AI is not going to support and maintain your code.
As entropy marches on, with more AI-generated lines of code in the codebase while software, APIs, and tooling have breaking changes, will this new class of "vibe coder" / "creator coder" have the means and time to maintain their massive codebases?
I think AI is good for MVPs, but if we're talking 10-30M lines of code then it might not be the best tool for the job.
Much (most?) of my time as a software engineer has been spent poking absurd holes in customer stories such that they are compelled to provide the actual requirements. This edge case probing is what LLMs are infamously bad at. They are too eager to please. There's not an inner asshole with an aggressive aesthetic preference that was built up over months of interchange with the client.
The constant here is "agency". LLMs inherently lack it. So, it has to come from somewhere. How many layers of abstraction do we need to put in between the will of the customer and the product they paid for?
I think a viable solution could be to use the LLM as a direct bridge between your product and the customer. Tool calling with these new reasoning models is a hell of a drug. It's not that difficult to just write this code. 99% of it is string interpolation. You don't need copilot for this.
I don't understand your use of "inherently" here. Even if you define LLMs as not having agency, I don't see any inherent limitation against tacking agency on top of them. As you alluded to even just a basic loop of `if (!goalAchieved()) {promptWithToolCalling()}` is arguably agency, no?
You actually suggested connecting the LLM directly between the product and the customer, such that the customer specifies the goal. What's stopping tech from going in this direction?
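For what it's worth, the "basic loop" idea above can be sketched in a few lines. This is a toy with a stubbed model call, not a real LLM integration: `call_model_with_tools` is a hypothetical stand-in for whatever tool-calling API you'd actually use, and the goal check mirrors the `goalAchieved()` / `promptWithToolCalling()` names from the comment:

```python
# Toy sketch of "tacking agency on top" of an LLM: loop until a goal
# check passes, re-prompting the model each iteration. The model call
# is stubbed so the sketch is self-contained.

def call_model_with_tools(prompt: str, state: dict) -> dict:
    # Stub: a real implementation would send `prompt` plus tool schemas
    # to an LLM and apply whatever tool calls come back to `state`.
    # Here we just pretend the model made one unit of progress.
    state["progress"] += 1
    return state

def agent_loop(goal_achieved, state: dict, max_iters: int = 20) -> dict:
    # The `if (!goalAchieved()) { promptWithToolCalling() }` loop, made
    # explicit, with an iteration cap so a never-satisfied goal halts.
    iters = 0
    while not goal_achieved(state) and iters < max_iters:
        state = call_model_with_tools("make progress toward the goal", state)
        iters += 1
    return state

# Example: the "goal" is reaching 3 units of progress.
result = agent_loop(lambda s: s["progress"] >= 3, {"progress": 0})
```

Whether a loop like this counts as "agency" is exactly the philosophical question being debated; mechanically, though, nothing stops you from building it.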
Despite the fact that the distinction is very philosophical, I think the implications are very practical. Without its own initiating energy, everything an AI produces will be a response to an input, and its response will be constrained by the bounds implied by that input. The specific type of dialectic between the programmer and the person giving requirements, which leads to discovering the ACTUAL requirements, cannot happen with an AI, because a dialectic requires two opposed agents/forces, and an AI is incapable of being an opposing force: it is only a derivative or product of whatever force is providing its input. Basically, it is constrained inside a box defined by the input it is given, and precisely what is needed for true synthesis (new ideas/thoughts, as opposed to an analytic breaking-down of already-proposed ideas) is a whole separate box to interact with the one defined by the input.
My explanation is extremely abstract and will probably only make sense to someone who almost agrees with me already, but that's the best I could do. I'm sure there is a more down-to-earth way to explain this, but I guess my understanding isn't good enough to find it yet. In my defense, I do think this particular issue of agency in AI is one of the most subtle and philosophical problems in the world right now that actually has practical implications.
This is all about suppressing wages, laying off American engineers, and rationalizing many tens of billions wasted on building AI infrastructure no one needed and no one will use.
For me, programming was always about expressing my intent.
I don’t think about the instructions the compiler generates. I also rarely think about the expanded form of a template expression.
If AI just acts as an intermediary between me and the compiler, adding yet another abstraction between me and the generated instructions, why should I care?
I will still have to somehow explain to the machine what it is that I want.
I'm at a loss, honestly. If not 2025, it will be 2030 or 2040. I fucking love software engineering.
Personally, I see robotics as something worth moving towards. It’s the intersection of software, mechanics, electronics, and math.
Maybe it’s just time to move into management…
This nonsense is about recalibrating the SWE labor market and garnering hype for tech. The primary product the technology industry creates is company equities, and their primary customer is anxious CEOs/hedge/pension funds.
Because of that, I wonder if legacy code bases will be less common in the future.
The only prediction I’m confident in is that it’s a bleak future for devs whose skillset consists of languages rather than interests. I’m one of those devs.
Now what to do? I have just finished my undergrad in software engineering and got admitted to a Master's, but I feel that's a mistake. At the same time, I never knew what else to do with my life but programming.
Stupid question: how do you become a high level programmer if entry and mid level roles disappear?
"However, this transition presents a paradox: who will oversee and correct AI-generated code? Even the most advanced AI models are prone to errors, necessitating human oversight to ensure reliability and security."
I see a new role for programmers. The ex-coders will oversee quality control and step in as needed in the future.
Programmers will probably have a few more years (less than ten), but long term their role will radically change.
A great many once-manual tasks have already been delegated to a bajillion logic gates and libraries, and a rather large part of the job is managing them to play together.
Someone proposed this nice project AI will fail at:
But how would Zuckerberg know, he has never written anything special.
That’s the whole point, why even bother if AI does it faster?
Do you buy hand-crafted furniture? Probably not, because even if it's better, it's way more expensive.