https://en.m.wikipedia.org/wiki/Rubber_duck_debugging
I think the big question everyone wants to skip straight to in this conversation is: will this continue to be true two years from now? I don’t know how to answer that question.
You know that saying that the best way to get an answer online is to post a wrong answer? That's what LLMs do for me.
I ask the LLM to do something simple but tedious, it does it spectacularly wrong, and then I get pissed off enough that I have the rage-induced energy to do it myself.
I've seen enough people led astray by talking to it.
This has not been my experience. LLMs have definitely been helpful, but generally they either give you the right answer or invent something plausible-sounding but incorrect.
If I tell it what I'm doing I always get breathless praise, never "that doesn't sound right, try this instead."
IDK whether somebody will create something new that's better. But there is no reason at all to extrapolate from our current AIs to something that solves programming. Whatever constraints that new thing has will be completely unrelated to the current ones.
Before LLMs it was mostly fine, because they just didn’t do that kind of work. But now it’s like a very subtle chaos monkey has been unleashed. I’ve asked on some PRs, “Why is this like this? What is it doing?” And the answer is, “I don’t know, ChatGPT told me I should do it.”
The issue is that it throws basically all their code under suspicion. Some of it works, some of it doesn’t make sense, and some of it is actively harmful. But because the LLMs are so good at giving plausible output I can’t just glance at the code and see that it’s nonsense.
And this would be fine if we were working on like a crud app where you can tell what is working and broken immediately, but we are working on scientific software. You can completely mess up the results of a study and not know it if you don’t understand the code.
For me, it's less "conversation to be skipped" and more "can we even get to 2 years from now?" There's so much instability right now that it's hard to say what anything will look like in 6 months.
This is my first comment so I'm not sure how to do this but I made a BYO-API key VSCode extension that uses the OpenAI realtime API so you can have interactive voice conversations with a rubber ducky. I've been meaning to create a Show HN post about it but your comment got me excited!
In the future I want to build features to help people communicate their bugs / what strategies they've tried to fix them. If I can pull it off it would be cool if the AI ducky had a cursor that it could point and navigate to stuff as well.
Please let me know if you find it useful https://akshaytrikha.github.io/deep-learning/2025/05/23/duck...
Another example: saying out loud the colors red, blue, yellow, purple, orange, green—each color creates a feeling that goes beyond its physical properties into the emotions and experiences. AI image-generation might know the binary arrangement of an RGBA image but actually, it has NO IDEA what it is to experience colour. No idea how to use the experience of colour to teach a peer of an algorithm. It regurgitates a binary representation.
At some point we’ll get there though—no doubt. It would be foolish to say never! Those who want to get there before everyone else should probably focus on the organoids—because most powerful things come from some Faustian monstrosity.
I wonder if the term "rubber duck debugging" will still be used much longer into the future.
I still think about Tom Scott's 'where are we on the AI curve' video from a few years back. https://www.youtube.com/watch?v=jPhJbKBuNnA
Looking forward to rubber-duck-shaped hardware AI interfaces to talk to in the future. I'm sure somebody will create one.
It's ignorant to think machines will never catch up to our intelligence, but for now, they clearly haven't.
I think there needs to be some kind of revolutionary breakthrough again to reach the next stage.
If I were to guess, it needs to be in the learning/backpropagation stage. LLMs are very rigid, and once they go wrong, you can't really get them out of it. A junior developer, for example, could gain a new insight. LLMs, not so much.
They drive you nuts trying to communicate with them what you actually want them to do. They have a vast array of facts at immediate recall. They’ll err in their need to produce and please. They do the dumbest things sometimes. And surprise you at other times. You’ll throw vast amounts of their work away or have to fix it. They’re (relatively) cheap. So as an army of monkeys, if you keep herding them, you can get some code that actually tells a story. Mostly.
I was on r/fpga recently and mentioned that I'd had a lot of success getting LLMs to code up first-cut testbenches that let you simulate your FPGA/HDL design much quicker than if you wrote those testbenches yourself. My comment was met with lots of derision, but they hadn't even given it a try before concluding it just couldn't work.
I've seen Claude and ChatGPT happily hallucinate whole APIs for D3 on multiple occasions, which should be really well represented in the training sets.
Remember how blockchain was going to change the world? Web3? IoT? Etc etc.
I've been through enough of these cycles to understand that, while the AI gimmick is cool and all, we're probably at a local maximum. The reliability won't improve much from here (hallucinations etc), while the costs to run it will stay high. The final tombstone will be when the AI companies stop running at a loss and actually charge for the massive costs associated with running these models.
Using it to prototype some low level controllers today, as a matter of fact!
Mediocre ones … maybe not so much.
When I worked for a Japanese optical company, we had a Japanese engineer, who was a whiz. I remember him coming over from Japan, and fixing some really hairy communication bus issues. He actually quit the company, a bit after that, at a very young age, and was hired back as a contractor; which was unheard of, in those days.
He was still working for them, as a remote contractor, at least 25 years later. He was always on the “tiger teams.”
He did awesome assembly. I remember when the PowerPC came out, and “Assembly Considered Harmful” was the conventional wisdom, because of pipelining, out-of-order execution, prefetching, and all that.
His assembly consistently blew the doors off anything the compiler did. Like, by orders of magnitude.
One major aspect of software engineering is social: requirements analysis and figuring out what the customer actually wants, which they often don't know themselves.
If a human engineer struggles to figure out what a customer wants and a customer struggles to specify it, how can an LLM be expected to?
Probably going to have the same outcome.
I actually imagine it's the opposite of what you say here. I think technically inclined "IT business partners" will be capable of creating applications entirely without software engineers... because I see that happen every day in the world of green energy. The issues come later, when things have to be maintained, scaled, or made efficient. This is where the software engineering comes in, because it actually matters whether you used a list or a generator in your Python app when it iterates over millions of items and not just a few hundred.
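The list-vs-generator point is easy to demonstrate. A minimal sketch (exact byte counts vary by Python version, so the comments only give orders of magnitude):

```python
import sys

# Materializing every item up front: memory grows with input size.
rows_list = [i * 2 for i in range(1_000_000)]

# A generator yields one item at a time: memory use stays constant.
rows_gen = (i * 2 for i in range(1_000_000))

print(sys.getsizeof(rows_list))  # on the order of megabytes
print(sys.getsizeof(rows_gen))   # on the order of a hundred bytes

# Both produce the same values when iterated:
assert sum(rows_gen) == sum(rows_list)
```

Over a few hundred items nobody notices the difference; over millions, the list version can exhaust memory while the generator keeps chugging.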
Chat UIs are an excellent customer feedback loop. Agents develop new iterations very quickly.
LLMs can absolutely handle abstractions and different kinds of component systems and overall architecture design.
They can also handle requirements analysis. But it comes back to iteration for the bottom line which means fast turnaround time for changes.
The robustness and IQ of the models continue to improve. All of software engineering is well on its way to being automated.
Probably five years max where un-augmented humans are still generally relevant for most work. You are going to need deep integration of AI into your own cognition somehow in order to avoid just being a bottleneck.
Software engineering is a different thing, and I agree you're right (for now at least) about that, but don't underestimate the sheer number of brainless coders out there.
It really depends on the organization. In many places product owners and product managers do this nowadays.
Presumably, they're trained on a ton of requirements docs, as well as a huge number of customer support conversations. I'd expect them to do this at least as well as coding, and probably better.
These little side quests used to eat a lot of my time and I’m happy to have a tool that can do these almost instantly.
Also, there are often multiple ways to achieve a certain style, and they all work fine until you want a particular tweak, in which case only one will work, and the LLM usually gets stuck on one of the ones that doesn't.
Telling, isn't it?
What an awful imagination. Yes there are people who don't like CSS but are forced to use it by their job so they don't learn it properly, and that's why they think CSS is rote memorization.
But overall I agree with you: if a company is too cheap to hire a person who is actually skilled at CSS, it is still better to foist that CSS job onto LLMs than onto an unwilling human. Because that unwilling human is not going to learn CSS well and won't enjoy writing it.
On the other hand, if the company is willing to hire someone who's actually good, LLMs can't compare. It's basically the old argument of LLMs only being able to replace less-good developers. In this case, you admitted that you are not good at CSS and LLMs are better than you at it. It's not task-dependent; it's skill-dependent.
Companies that try to replace their employees with LLMs and AIs will fail.
Unfortunately, all that's in the long run. In the near term, some CEOs and management teams will profit from the short term valuations as they squander their companies' future growth on short-sighted staff cuts.
That's actually really interesting to think about. The idea that doing something counter-productive like trying to replace employees with AI (which will cause problems), may actually benefit the company in terms of valuations in the short run. So in effect, they're hurting and helping the company at the same time.
“The market can remain irrational longer than you can remain solvent.” — attributed to John Maynard Keynes
The problem is that the software world got eaten up by the business world many years ago. I'm not sure at what point exactly, or if the writing was already on the wall when Bill Gates wrote his open letter to hobbyists in 1976.
The question is whether shareholders and managers will accept less good code. I don't see how it would be logical to expect anything else; as long as the profit lines go up, why would they care?
Short of some sort of cultural pushback from developers or users, we're cooked, as the youth say.
Bad code leads to bad business
This makes me think of hosting departments; you know, the people who use VMware, physical firewalls, DPI proxies, and whatnot.
At the other edge, you have public cloud providers, who use QEMU, netfilter, dumb networking devices, and the like.
Who got eaten by whom? Nobody could have guessed...
Could most software be more awesome? Yes. Objectively, yes. Is most software garbage? Perhaps by raw volume of software titles, but are most popular applications I’ve actually used garbage? Nope. Do I loathe the whole subscription thing? Yes. Absolutely. Yet, I also get it. People expect software to get updated, and updates have costs.
So, the pertinent question here is, will AI systems be worse than humans? For now, yeah. Forever? Nope. The rate of improvement is crazy. Two years ago, LLMs I ran locally couldn’t do much of anything. Now? Generally acceptable junior dev stuff comes out of models I run on my Mac Studio. I have to fiddle with the prompts a bit, and it’s probably faster to just take a walk and think it over than spend an hour trying different prompts… but I’m a nerd and I like fiddling.
Corporations create great code too: they're not all badly run.
The problem isn't a code quality issue: it is a moral issue of whether you agree with the goals of capitalist businesses.
Many people have to balance the needs of their wallet with their desire for beautiful software (I'm a developer-founder I love engineering and open source community but I'm also capitalist enough to want to live comfortably).
In the long term (post AGI), the only safe white-collar jobs would be those built on data which is not public i.e. extremely proprietary (e.g. Defense, Finance) and even those will rely heavily on customized AIs.
Isn't every little script, every little automation we programmers do, in the same spirit? "I don't like doing this, so I'm going to automate it, so that I can focus on other work."
Sure, we're racing towards replacing ourselves, but there would be (and will be) other, more interesting work for us to do when we're free to do it. Perhaps all of us will finally have time to learn surfing, or gardening, or something. Some might still write code by hand, just like how some folks like making bread... but making bread by hand is not how you feed a civilization, even if hundreds of bakers were put out of business.
Where do you get this? The limitations of LLMs are becoming more clear by the day. Improvements are slowing down. Major improvements come from integrations, not major model improvements.
AGI likely can't be achieved with LLMs. That wasn't as clear a couple years ago.
Making our work more efficient, or making humans redundant, should be really exciting. It's not set in stone that we need to leave people middle-aged, with families, and suddenly unable to earn enough to provide a good life.
Hopefully if it happens, it happens to such a huge amount of people that it forces a change
Now we have Geoffrey Hinton getting the prize for contributing to one of the most destructive inventions ever.
I think they are hoping that their future is safe. And it is the average minds that will have to go first. There may be some truth to it.
Also, many of these smartest minds are motivated by money, to safeguard their future, from a certain doom that they know might be coming. And AI is a good place to be if you want to accumulate wealth fast.
Right now the scope of what an LLM can solve is pretty narrow. Anytime more than a class or two is involved, or the code base is more than 20 or 30 files, even the best LLMs start to stray and lose focus. They can't seem to keep a train of thought, which leads to churning out way too much code.
If LLMs are going to replace real developers, they will need to accept significantly more context, they will need a way to gather context from the business at large, and some way to persist a train of thought across the life of a codebase.
I'll start to get nervous when these problems are close to being solved.
I paste in the entire codebase for my small ETL project (100k tokens) and it’s pretty good.
Not perfect, still a long ways to go, but a sign of the times to come.
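For anyone wondering how the paste-the-whole-codebase workflow looks in practice, here is a minimal sketch. The directory name and extension list are made up for illustration, and the 4-characters-per-token figure is a rough rule of thumb, not a real tokenizer:

```python
from pathlib import Path

def bundle_codebase(root, exts=(".py", ".sql", ".md")) -> str:
    """Concatenate project files into one paste-able blob,
    with a path header before each file."""
    parts = []
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f"# --- {path} ---\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

root = Path("my_etl_project")  # hypothetical project directory
if root.is_dir():
    blob = bundle_codebase(root)
    # Rough estimate: ~4 characters per token for English text and code.
    print(f"~{len(blob) // 4:,} tokens")
```

The path headers matter: they let the model refer back to specific files when it answers.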
I never made a Dockerfile in my life, so I thought it would be faster just getting o3 to point to the GitHub repo and let it figure out, rather than me reading the docs and building it myself.
I spent hours debugging the file it gave me... It kept adding hallucinated things that didn't exist, removing/rewriting other parts, and making other big mistakes, like not understanding the difference between python3 and python and the intricacies involved.
Finally I gave up and Googled some docs instead. Fixed my file in minutes and was able to jump into the container and debug the rest of the issues. AI is great, but it's not a tool to end all. You still need someone who is awake at the wheel.
I get having a bad taste in your mouth, but these tools _aren't_ magic, and they do have something of a steep learning curve to get the most out of them. Not dissimilar from vim/emacs (or lots of dev tooling).
edit: To answer a reply (HN has annoyingly limited my ability to make new comments): yes, internet search is always available to ChatGPT as a tool. Explicitly clicking the globe icon will encourage the model to use it more often, however.
I don’t think I've ever written "this API doesn't exist" and then gotten a useful alternative.
Claude is the only one that regularly tells me something isn't possible rather than making sh*t up.
One thing I know is that I wouldn't ask an LLM to write an entire section of code or even a function without going in and reviewing.
These days I am working on a startup doing [a bit of] everything, and I don't like the UI it creates. It's useful enough when I make the building blocks and let it be, but allowing claude to write big sections ends up with lots of reworks until I get what I am looking for.
LLMs will never be able to figure out for themselves what your project's politics are and what trade-offs are supposed to be made. Even the ultimate model will still require a user to explain the trade-offs in a prompt.
I wouldn't declare that unsolvable. The intentions of a project and how they fit into user needs can be largely inferred from the code and associated docs/README, combined with good world knowledge. If you're shown a codebase of a GPU kernel for ML, then as a human you instantly know the kinds of constraints and objectives that go into any decisions. I see no reason why an LLM couldn't also infer the same kind of meta-knowledge. Of course, this papers over the hard part of training the LLMs to actually do that properly, but I don't see why it's inherently impossible.
Honestly though, when that replacement comes, there is no sympathy to be had. Many developers have brought this upon themselves. For roughly the 25-year period from 1995 to 2020, businesses tried to turn developers into mindless commodities that are straightforward to replace. Developers have overwhelmingly encouraged this, and many still do. These are the people who hop employers every 2 years and cannot do their jobs without lying on their resumes or complete reliance on a favorite framework.
Maybe it's the way you talk about 'developers'. Nothing I have seen has felt like the sky falling on an industry; to me at most it's been the sky falling on a segment of silicon valley.
1) Starting simple codebases, 2) googling syntax, 3) writing bash scripts that use Unix commands whose arguments I've never bothered to learn in the first place.
I definitely find time savings with these, but the esoteric knowledge required to work on a 10+ year old codebase is simply too much for LLMs still, and the code alone doesn't provide enough context to do anything meaningful, or even faster than I would be able to do myself.
Very few people are doing truly cutting-edge stuff; we call them visionaries. But most of the time, we're merely doing what's expected.
And yes, that includes this comment. This wasn't creative or an original thought at all. I'm sure hundreds of people have had similar thoughts, and I'm probably parroting someone else's idea here. So if I can do it, why can't an LLM?
If you're getting average results you most likely haven't given it enough details about what you're looking for.
The same largely applies to hallucinations. In my experience LLMs hallucinate significantly more when at or pushed to exceed the limits of their context.
So if you're looking to get a specific output, your success rate is largely determined by how specific and comprehensive the context the LLM has access to is.
Indeed, it's likely already the case that in training, the top scraped links or most popular videos are weighted higher; these are likely to be better than average.
And what really matters is whether the task gets reliably solved.
So if they could actually manage this on average, with average quality... that would be a next-level gamechanger.
AI is neat for average people, producing average code, for average companies.
In a competitive world, using AI is a death sentence.
What I mean is that you are right, assuming we use a transformation that, while reversible, has an avalanche effect. BTW, in practical terms I doubt there are practical differences.
The whole thing seems like a pretty good example of collaboration between human and LLM tools.
I actually like LLMs better for creative thinking because they work like a very powerful search engine that can combine unrelated results and pull in adjacent material I would never personally think of.
We're being told that LLMs are now reasoning, which implies they can make logical leaps and employ creativity to solve problems.
The hype cycle is real, and it sets expectations that get higher the less you know about how these things work.
The programmer is an architect of logic and computers translate human modes of thought into instructions. These tools can imitate humans and produce code given certain tasks, typically by scraping existing code, but they can't replace that abstract level of human thought to design and build in the same way.
When these models are given greater functionality to not only output code but to build out entire projects given specifications, then the role of the human programmer must evolve.
And by better, I don’t mean in terms of code quality because ultimately that doesn’t matter for shipping code/products, as long as it works.
What does matter is speed. And an LLM speeds me up at least 10x.
You expect to achieve more than a decade of pre-LLM accomplishments between now and June 2026?
99% of professional software developers don’t understand what he said much less can come up with it (or evaluate it like Gemini).
This feels a bit like a humblebrag about how well he can discuss with an LLM compared to others vibecoding.
It's about Humans vs. Humans+AI
and 4/5, Humans+AI > Humans.
Neither of these issues is particularly damning on its own, as improvements to the technology could change this. However, the reason I have chosen to avoid them is unlikely to change; that they actively and rapidly reduce my own willingness for critical thinking. It’s not something I noticed immediately, but once Microsoft’s study showing the same conclusions came out, I evaluated some LLM programming tools again and found that I generally had a more difficult time thinking through problems during a session in which I attempted to rely on said tools.
In my experience working on enterprise app modernization, we’ve found success by keeping humans firmly in the loop. Tools like Project Analyzer (from Techolution’s AppMod.AI suite) assist engineers in identifying risky legacy code, mapping dependencies, and prioritizing refactors. But the judgment calls, architecture tweaks, and creative problem-solving? Still very much a human job.
LLMs boost productivity, but it’s the developer's thinking that makes the outcome truly resilient. This story captures that balance perfectly.
Increase the temperature of the LLMs.
Ask several LLMs the same question, each several times, with tiny variations. Then collect all the answers, and do a second/third round asking each LLM to review all the collected answers and improve on them.
Add random constraints, one constraint per question. For example, tell the LLM: can you do this with 1 bit per X? Do this in O(n). Do this using linked lists only. Do this with only 1 KB of memory. Do this while splitting the task across 1000 parallel threads. Etc.
This usually kicks the LLM out of its comfort zone, into creative solutions.
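That fan-out-and-review loop is easy to script. A sketch with the model call stubbed out (`ask()`, the model names, and the constraint list are placeholders; in practice `ask()` would hit whatever APIs you use):

```python
import random

MODELS = ["model-a", "model-b", "model-c"]  # hypothetical model names

CONSTRAINTS = [
    "use 1 bit per element",
    "do it in O(n)",
    "use linked lists only",
    "fit in 1 KB of memory",
    "split the work across 1000 parallel threads",
]

def ask(model: str, prompt: str) -> str:
    """Stub standing in for a real model API call."""
    return f"[{model}] answer to: {prompt[:60]}"

def brainstorm(question: str, rounds: int = 3) -> list:
    # Round 1: fan out; each model gets the question plus one random constraint.
    answers = [ask(m, f"{question} Constraint: {random.choice(CONSTRAINTS)}")
               for m in MODELS]
    # Later rounds: each model reviews all collected answers and improves them.
    for _ in range(rounds - 1):
        digest = " | ".join(answers)
        answers = [ask(m, f"Review and improve: {digest}") for m in MODELS]
    return answers

for a in brainstorm("How would you deduplicate a 10 GB log file?"):
    print(a)
```

The review rounds are where the ensemble earns its keep: each model sees answers it would not have produced on its own.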
How did it help, really? By telling you your idea was no good?
A less confident person might have given up because of the feedback.
I just can't understand why people are so excited about having an algorithm guessing for them. Is it the thrill when it finally gets something right?
These posts are gonna look really silly in the not too distant future.
I get it, spending countless hours honing your craft and knowing that AI will soon make almost everything you learned useless is very scary.
Now that the project has grown and all that stuff is hammered out, it can't seem to consistently write code that compiles. It's very tunnel-visioned on the specific file it's generating, rather than where that fits in the context of what and how we're building.
We'll find new ways to push the tech.
If software is about meeting human demands, humans will always write its requirements, by definition. If we build another machine like LLMs, well the design of those LLMs is subject to human demands. There is no point at which we can demand perfection but not be involved in its definition.
LLMs are great as assistants. Just today, Copilot told me it's there to do the "tedious and repetitive" parts so I can focus my energy on the "interesting" parts. That's great. They do the things every programmer hates having to do. I'm more productive in the best possible way.
But ask it to do too much and it'll return error-ridden garbage filled with hallucinations, or just never finish the task. The economic case for further gains has diminished greatly while the cost of those gains rises.
Automation killed tons of manufacturing jobs, and we're seeing something similar in programming, but keep in mind that the number of people still working in manufacturing is 60% of the peak, and those jobs are much better than the ones in the 1960s and 1970s.
And also, manufacturing jobs have greatly changed. And the effect is not even, I imagine. Some types of manufacturing jobs are just gone.
Translation software has been around for a couple of decades. It was pretty shitty. But about 10 years ago it got to the point where it could translate relatively accurately. However, it couldn't produce text that sounded like it was written by a human. A good translator (and there are plenty of bad ones) could easily outperform a machine. Their jobs were "safe".
I speak several languages quite well and used to do freelance translation work. I noticed that as the software got better, you'd start to see companies who instead of paying you to translate wanted to pay you less to "edit" or "proofread" a document pre-translated by machine. I never accepted such work because sometimes it took almost as much work as translating it from scratch, and secondly, I didn't want to do work where the focus wasn't on quality. But I saw the software steadily improving, and this was before ChatGPT, and I realized the writing was on the wall. So I decided not to become dependent on that for an income stream, and moved away from it.
Then LLMs came out, and they now produce text that sounds like it was written by a native speaker (in major languages). Sure, it's not going to win any literary awards, but the vast, vast majority of translation work out there is commercial, not literary.
Several things have happened: 1) there's very little translation work available compared to before, because now you can pay only a few people to double-check machine-generated translations (that are fairly good to start with); 2) many companies aren't using humans at all as the translations are "good enough" and a few mistakes won't matter that much; 3) the work that is available is high-volume and uninteresting, no longer a creative challenge (which is why I did it in the first place); 4) downward pressure on translation rates (which are typically per word), and 5) very talented translators (who are more like writers/artists) are still in demand for literary works or highly creative work (i.e., major marketing campaign), so the top 1% translators still have their jobs. Also more niche language pairs for which LLMs aren't trained will be safe.
It will continue to exist as a profession, but diminishing, until it'll eventually be a fraction of what it was 10 or 15 years ago.
(This is specifically translating written documents, not live interpreting which isn't affected by this trend, or at least not much.)
While the general syntax of the language now seems to be somewhat correct, the LLMs still don't know anything about those languages and keep mistranslating words due to their inherently English-centric design. A whole lot of concepts don't even exist in English, so these translation oracles can just never render them successfully.
If I read a few minutes of LLM-translated text, there are always a couple of such errors.
I notice younger people don't see these errors because of their weaker language skills, and the LLMs reinforce their incorrect understanding.
I don't think this problem will go away as long as we keep pushing this inferior tech; instead, the languages will devolve to "fix" it.
Languages will morph into a 1-to-1 mapping of English, and all the cultural nuances will be lost to time.
But they still often get things completely wrong, especially in high-context languages such as Japanese. There often isn't a way to convey the necessary context in text. For example, a Japanese live-streamer who says "配信来てくれてありがとう" means "Thank you for coming to (watch) the stream", but DeepL will give "Thanks for coming to the delivery." - because 配信 actually does mean "delivery" in ordinary circumstances, and that's just the word they idiomatically use to refer to a stream. No matter how much you add from the transcript before or after that, it doesn't communicate the essential fact that the text is a transcript of a livestream.
(And going the other way around, DeepL will insert a に which is grammatically correct but rarely actually uttered by livestreamers who are speaking informally and colloquially; and if you put "stream" in the English, it will be rendered as a loanword ストリーム which is just not what they actually say. Although I guess it should get credit for realizing that you don't mean a small river, which would be 小川.)
(Also, DeepL comes up with completely incomprehensible nonsense for 歌枠, where even basic dictionaries like Jisho get it right.)
More obviously, they get pronouns and roles wrong all the time - they can't reliably tell whether someone is saying that "I did X" or "you did X" because that may depend on facts of the natural world outside of the actual text (which wouldn't include more information than the equivalent of "did X"). A human translator for, say, a video game cutscene may be able to fix these mistakes by observing what happened in the cutscene. The LLM cannot; no matter how good its model of how Japanese is spoken, it lacks this input channel.
So much of it is exploratory, deciding how to solve a problem from a high level, in an understandable way that actually helps the person who it’s intended to help and fits within their constraints. Will an LLM one day be able to do all of that? And how much will it cost to compute? These are the questions we don’t know the answer to yet.
Like sex professionals, Gemini and co. are made to be impressed and to have positive things to say about the programming ideas you propose, and to find your questions "interesting", "deep", "great", and so on.
Being “impressed” is a human feeling that can’t be ascribed to an AI, period. An AI can tell you that it’s impressed because it’s been trained to do so, but it doesn’t “know” (and by this I’m referring to knowing what the feeling is, not knowing the definition) what it means to be impressed.
There is a principle (I forget where I encountered it) that it is not code itself that is valuable, but the knowledge of a specific domain that an engineering team develops as they tackle a project. So code itself is a liability, but the domain knowledge is what is valuable. This makes sense to me and matched my long experience with software projects.
So, if we are entrusting coding to LLMs, how will that value develop? And if we want to use LLMs but at the same time develop the domain acumen, that means we would have to architect things and hand them over to LLMs to implement, thoroughly check what they produce, and generally guide them carefully. In that case they are not saving much time.
Doing the actual thinking is generally not the part I need too much help with. Though it can replace googling info in domains I'm less familiar with. The thing is, I don't trust the results as much and end up needing to verify it anyways. If anything AI has made this harder, since I feel searching the web for authoritative, expert information has become harder as of late.
LLMs are faster, and when the task can be synthetically tested for correctness, and you can build up to it heuristically, humans can't compete. I can't spit out a full game in 5 minutes, can you?
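The "synthetically tested for correctness" point is the key one: when you can check output mechanically, the check is cheap and the generator's speed wins. A minimal sketch in Python, where llm_sort is a hypothetical stand-in for whatever code the model produced:

```python
import random

def llm_sort(xs):
    # Stand-in for model-generated code; here it happens to be correct.
    return sorted(xs)

def passes_synthetic_check(fn, trials=1000):
    """Compare a candidate sort against Python's built-in on random inputs."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        if fn(list(xs)) != sorted(xs):
            return False
    return True

print(passes_synthetic_check(llm_sort))  # prints True for a correct sort
```

If a harness like this exists, you can regenerate and re-check in a loop far faster than a human could iterate by hand; when no such oracle exists, the speed advantage matters much less.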
LLMs are also cheaper.
LLMs are also obedient and don't get sick, and don't sleep.
Humans are still better by other criteria. But none of this matters. All disruptions start from the low end, and climb from there. The climbing is rapid and unstoppable.
Maybe LLMs completely trivialize all coding. The potential for this is there.
Maybe progress slows to a snail's pace, the VC money runs out, and companies massively raise prices, making it not worth using.
No one knows. Just sit back and enjoy the ride. Maybe save some money just in case
The other, related question is, are human coders with an LLM better than human coders without an LLM, and by how much?
(habnds made the same point, just before I did.)
Source: https://www.thoughtworks.com/insights/blog/generative-ai/exp...
There will always be a place for really good devs but for average people (most of us are average) I think there will be less and less of a place.
You open your post with "we need to accept" and then end with this
This terrifies me. The idea that AI results in me having "less of a place" in society?
The idea of mass unemployment?
We should be scared
While technically capable of building it on my own, development is not my day job and there are enough dumb parts of the problem my p(success) hand-writing it would have been abysmal.
With rose-tinted glasses on, maybe LLMs exponentially expand the amount of software written and the net societal benefit of technology.
Maybe the way forward would be to invent better "specification languages" that are easy enough for humans to use, then let the AI implement the specification you come up with.
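A lightweight version of such a "specification language" already exists: an executable spec of properties the generated code must satisfy. A hypothetical sketch (SPEC, check, and dedupe are illustrative names, not an existing tool; dedupe stands in for AI-generated code):

```python
def dedupe(xs):
    # Candidate implementation: remove duplicates, keeping first occurrences.
    seen, out = set(), []
    for x in xs:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

# The human writes the spec; the AI writes (and rewrites) the implementation
# until every clause holds.
SPEC = [
    ("no duplicates", lambda f: len(set(f([1, 1, 2]))) == len(f([1, 1, 2]))),
    ("order preserved", lambda f: f([3, 1, 3, 2]) == [3, 1, 2]),
    ("empty input", lambda f: f([]) == []),
]

def check(f):
    """Return the names of spec clauses the implementation violates."""
    return [name for name, prop in SPEC if not prop(f)]

print(check(dedupe))  # prints [] when the implementation satisfies the spec
```

The open question is whether writing a spec precise enough to trust ends up easier than writing the implementation itself.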
I memorize very little and tend to spend time reinventing algorithms or looking them up in documentation. Verifying is easy except in the few cases where the LLM produces something really weird. But then I fall back to the docs or to reinventing.
Unless you’re a web dev. Then you’re right and will be replaced soon enough. Guess why.
A chunk of the objections indicate people trying to shoehorn in their old way of thinking and working.
I think you have to experiment and develop some new approaches to remove the friction and get the benefit.
What benefit to me?
If I'm 50% more productive using AI that's great for my employer, but what do I get out of it?
I get to continue to be employed? I already had that before AI
So what do I get out of this, exactly?
It’s a tough bar if LLMs have to be post antirez level intelligence :)
99% of professional software developers don’t understand what he said much less can come up with it (or evaluate it like Gemini).
This feels a bit like a humblebrag about how well he can discuss with an LLM compared to others vibecoding.
"Your job will be taken by someone who does more work faster/cheaper than you, regardless of quality" has pretty much always been true
That's why outsourcing happens too
Glad to see the author acknowledges their usefulness and limitations so far.
At this point I think people who don't see the value in AI are willfully pulling the wool over their own eyes.
Think about it and tell me you use it differently.
A general LLM model would not be as good as an LLM specialized for coding; for this case the Google DeepMind team maybe has something better than Gemini 2.5 Pro.
In that regard I am less optimistic.
We have never been good at confronting the follies of management. The Leetcode interview process is idiotic but we go along with it. Ironically, LC was one of the first victims of AI, but this is even more of an issue for management that thinks SWEs solve Leetcodes all day.
Ultimately I believe this is something that will take a cycle for business to figure out by failing. When businesses figure out that 10 good engineers + AI always beat 5 + AI, it will become table stakes rather than something that replaces people.
Your competitor who didn’t just fire a ton of SWEs? Turns out they can pay for Cursor subscriptions too, and now they are moving faster than you.
I built Brokk to maximize the ability of humans to effectively supervise their AI minions. Not a VS Code plugin; we need something new. https://brokk.ai
There is a class of developers who are blindly dumping the output of LLMs into PRs without paying any attention to what they are doing, let alone reviewing the changes. This is contributing to introducing accidental complexity in the form of bolting convoluted solutions onto simple problems, and even introducing types in the domain model that make absolutely no sense to anyone with a passing understanding of the problem domain. Of course they introduce regressions no one would ever cause if they wrote things by hand and tested what they wrote.
I know this, because I work with them. It's awful.
These vibecoders force the rest of us to waste even more time reviewing their PRs. They are huge PRs that touch half the project for even the smallest change; they build and pass automated tests, but they enshittify everything. In fact, the same LLMs used by these vibecoders start to struggle to handle the project after these PRs are sneaked in.
It's tiring and frustrating.
I apologize for venting. It's just that in this past week I lost count of the number of times I had these vibecoders justifying shit changes going into their PRs as "but Copilot did this change", as if that makes them any good. I mean, a PR to refactor the interface of a service also sneaks in changes to the connection string, and they just push the change?
In the end, I had to go spend a couple hours reading the documentation to understand the matching engine, and the final patch didn't look anything at all like the LLM-generated code. Which is what seems to happen all the time: it is wonderful for spewing the boilerplate, but for the actual problem-solving portions it's like talking to someone who simply doesn't understand the problem and keeps giving you what it has rather than what you want.
OTOH, it's fantastic for review/etc even though I tend to ignore many of the suggestions. It's like a grammar checker of old: it will point out a comma you missed, but half the time the suggestions are wrong.
- LLMs are going to be 100x more valuable than me and make me useless? I don't see it happening. Here's 3 ways I'm still better than them.
Another factor is the capture of market sectors by Big Co. When buyers can only approach some for their products/services, the Big Co can drastically reduce quality and enshittify without hurting the bottom line much. This was the big revelation when Elon gutted Twitter.
And so we are in for interesting times. On the plus side, it is easier than ever to create software and distribute it. Hiring doesn't matter if I can get some product sense and make some shit worth buying.
You know, those who don't care about learning and solving problems, gaining real experience they can use to solve problems even faster in the future, faster than any AI slop.
An LLM is only as good as the material it is trained on; the same applies to AI in general, and they are not perfect.
Perplexity AI did assist me in getting into Python from 0 to getting my code tested with 94% coverage and no vulnerabilities (per scanning tools). Google Gemini is dogshit.
Blindly trusting code generated by an LLM/AI is a whole other beast, and I am seeing developers basically copy/pasting it into the company's code. People are using these sources as the truth and not as a complementary tool to improve their productivity.
This stuff is still in its infancy; of course it's not perfect.
But it's already USEFUL and it CAN do a lot of stuff - just not all types of stuff, and it can still mess up the stuff that it can do.
It's that simple.
The point is that over time it'll get better and better.
Reminds me of self-driving cars, or even just general automation back in the day - the complaint was always that a human could do it better, and at some point those people just went away because it stopped being true.
Another example is automated mail sorting by the post office. The gripe was always that humans would always be able to do it better - true, but in the meantime the post office reduced the facilities with humans doing this to just one.