It isn't much different from how it works with a team. You have an architect who understands the broader landscape, developers who implement certain subsystems, a testing strategy, communication, teaching, management. The only difference now is that I can do all of this with my team being LLMs/agents, while I focus on the leadership stuff: docs, designs, tests, direction, vision.
I do miss coding, but it just isn't worth it anymore.
That's partly an illusion. Try doing everything manually. A few years ago, after using only inline suggestions for six months, I noticed that my skills had gotten way worse. I became way slower. You have to constantly exercise your brain.
This reminds me of people who watch tens of video courses about programming, but can't code anything when it comes to a real job. They have an illusion of understanding how to code.
For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.
I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.
LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.
Can you get any faddier than that? Of course they love AI.
I can't imagine how it is for people that try to write manually after years of heavy LLM usage.
This is what it means to understand something. It's like P vs NP: I don't need to find the solution, I just need to be able to verify _a_ solution.
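To make the analogy concrete, here's a toy Python sketch (my own example, not from the thread) of that asymmetry for subset-sum: verifying a proposed solution is a linear scan, while finding one is brute-force exponential in the worst case.

```python
from itertools import combinations

def verify(nums, target, subset):
    """Checking a proposed subset-sum solution is cheap: O(n)."""
    # (simplified membership check; ignores duplicate elements)
    return all(x in nums for x in subset) and sum(subset) == target

def solve(nums, target):
    """Finding a solution is brute force: O(2^n) in the worst case."""
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

nums = [3, 34, 4, 12, 5, 2]
solution = solve(nums, 9)          # exponential search
print(verify(nums, 9, solution))   # linear check -> True
```

The same shape shows up when reviewing LLM output: checking that a given answer is correct is usually much cheaper than producing it yourself.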
This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!
But it’s probably the correct adaptation if not.
YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.
I still do puzzles like Advent of Code or problems from competitive programming from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of "file this paper into that filing cabinet": mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality, because otherwise everything collapses.
I actually think *more* than I used to, because I only get the hardest problems to solve. I mostly work on architectural documents these days.
If this is the price to pay to unlock this productivity boost, so be it but let’s keep in mind that:
- we need to be more careful not to burn out, since our job has become de facto harder (if done to its maximum potential);
- we always need to stay in control and have a way to verify what LLMs are doing, even on the easiest tasks, because they can fail even there, if rarely (...but we had to do this with junior devs anyway, didn't we?)
So do you review all that code your LLM generates for you?
Imagine somebody writes a blog post "why I bike to work". They detail that they love it, the fresh air, nature experience biking through a forest, yes sometimes it's raining but that's just part of the experience, and they get fit along the way. You respond with "well I take the car, it's just easier". Well, good for you, but not engaging with what they wrote.
This pretty much sums up my current mood with AI. I also like to think, but it just isn't worth it anymore as a SE at bigCorp. Just ask AI to do it and think for you and the result only has to be "good enough" (=> works, passes tests). Makes sense business wise, but it breaks me, personally.
We are trading the long-term benefits of truth and correctness for the short-term benefits of immediate productivity and money. This is like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage of this will continue to compound and bubble up.
LLMs have come a long way since ChatGPT 4.
The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human written code, and solve very complex challenges across multiple services.
All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
Wait.. are we talking about LLMs or humans here?
To what extent is Claude configuring these servers? Is this baremetal deployment with OS configuration and service management? Or is it abstracted by defining Terraform files to use pre-created images offered by a hosting service?
codex
“run my dev server”
My laziness knows no bounds.

I only hope that when you do, you don't take anyone else with you.
It’s one thing to be careless and delete all your own email; quite another to be careless and screw the lives of people using something you worked on and who had no idea you were YOLOing with their data.
Onion articles really write themselves these days. I for one would still rather keep the money and write 25% of it myself.
My mental model is that coding by hand is similar to horseback riding, sail boating, etc. These skills are still enjoyed by people and in some circumstances they are invaluable.
At this point it's worth considering a permanent, pinned "HN Flamewar: Will LLMs turn you into the next Ken Thompson or are you just a poser who can't write code" thread. We're having this same discussion, constantly, on 5 different threads on the frontpage.
Unfortunately, people need to experience a 1-million-line codebase in a dynamic language to figure out that types are actually pretty nice, and they need to write getters and setters for every field for a few years to figure out OOP is stupid, and they need to do 10 HTTP requests for something that could be 10 function calls to figure out microservices are stupid.
In none of these trends did the industry pause to evaluate whether what was being written was completely idiotic; it's only with a few decades of hindsight, after a lot of money has been lost, that we learn the lesson.
I wish I could have an HN front page with everything but AI news, both positive and negative.
There are some tasks, though, where it's pretty clear what you want: boring jobs that are totally not worth speccing out in detail and that an LLM will blaze through.
Things like:
- add oauth support to this API
- add a language switcher in this menu, an API endpoint, save it to the UserSettings table
- make a 404 page

The difference is that with other people, you are training somebody else in your team who will eventually internalize what you taught them and then be able to carry the philosophy forward. Even if it took exactly the same amount of time for you to explain (+ code review etc), it's a clear net benefit in the long run. Not so with an LLM. There it's just lost time.
Explaining the problem to an LLM and having it ask pointed questions is helpful IMHO, as well as being able to iterate fast (output new versions fast).
As an example, I'm currently making simple Windows utilities with the help of AI. Parsing config files in C is something the AI does perfectly. But an interesting part of the process is: what should go into a config file, or not, what are the best defaults, what should not be configurable: questions that don't have a perfect answer and that can only be solved by using each program for weeks, on different machines / in different contexts.
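To illustrate the split the parent describes, here's a minimal sketch of that kind of config loading: in Python rather than C for brevity, with made-up keys and defaults. The mechanical parsing is the part an AI handles perfectly; deciding which keys belong in the file and what their defaults should be is the part that takes weeks of real use.

```python
# Hypothetical key=value config loader: every setting has an explicit
# default, the file only overrides, and unknown keys fail fast.
DEFAULTS = {"log_level": "info", "max_retries": "3"}

def load_config(text):
    cfg = dict(DEFAULTS)  # start from defaults; the file only overrides
    for lineno, line in enumerate(text.splitlines(), 1):
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue  # skip blank lines
        key, sep, value = line.partition("=")
        key, value = key.strip(), value.strip()
        if not sep or key not in DEFAULTS:
            raise ValueError(f"line {lineno}: unknown or malformed setting {line!r}")
        cfg[key] = value
    return cfg

print(load_config("log_level = debug  # verbose while testing"))
```

The fail-fast-on-unknown-keys choice is itself one of those judgment calls: whether typos should error out or be silently ignored is exactly the kind of question no generator answers for you.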
I'd dispute "the whole point" - there's a whole bunch of problems I can understand but would struggle to implement effectively in code (which is another big point - there's little use in a solution that takes, e.g., two months to calculate last week's numbers when your revenue/profit/planning depends on those numbers.)
At a minimum, for me, the difficulties of programming are many stepped: understanding the problem -> converting that understanding to algorithms/whatnot -> implementing that understanding -> making it efficient (if required) -> verifying the solution.
Trying to boil it down to "ONE COOL TRICK!" that justifies vibe-coding is daft.
[There's also a whole bunch of things I can implement but don't really understand (business logic, sales/tax rules, that kind of thing) but that's why we have project managers, domain experts, etc.]
Edit macros and awk+grep solved that.
Have a large LLM-written change set that works, but that you're not sure you fully understand? Make the coding agent quiz you on the design and implementation decisions. This can be a lot more engaging than trying to do a normal code review, and you might even learn something from it. Probably not as much as if you had done it all yourself, but that's just a question of how much effort you want to invest in the understanding.
I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it gave me an understanding of how the grammar works. (And LLMs still confidently fail at really basic tasks I ask them for in this language.)
There is nothing wrong whatsoever with just getting things done.
Grady Booch (co-creator of UML) has this to say about AI: this is a shift of the abstraction of software engineering up a level. It's very similar to when we moved from programming in assembly to structured languages, which abstracted away the machine. Now we're abstracting away the code itself.
That means specs and architectural understanding are now the locus of work - which is exactly what Neil is claiming to be trying to preserve. I mean, yeah you can give that up to the AI as well but then you just get vibecoded garbage with huge security/functionality holes.
The metric became lines of code, although those of us who started off coding as children, back when MySpace was a thing and GoTo was the best-performing search engine, are well aware that lines of code is the stupidest metric you can come up with. But slop machines produce so much of it that it's easy to see why many people go "see? see this? it works! And you were gonna be doing this for 2 days like a caveman." Gladly, because two data pipelines that do the exact same thing take 4 days to run on slop code, whereas my caveman approach takes single-digit hours and does not produce several billion rows of unusable garbage.

Not to mention the countless times someone has asked me to help them when they are stuck, and a simple question such as "where do you define the path to the output directory?" leads to 10 minutes of scrolling in a project that contains a total of 10,000 lines of code.

The good news for us mortals is that this approach is starting to bite people back, and the companies that manage to survive the inevitable head-on collision will have to dig deep into their pockets to get people to clean up the mess.
As the original author pointed out, the advice to jog or ride a bike because driving all the time is bad for your health is sound, but the Red Flag Act has proven to be a foolish endeavor. I believe the same phenomenon will occur.
I wanted to change it from 32-bit MS-DOS to 64-bit Linux, but it realized that the segmented memory model cannot be implemented in a flat 64-bit address space without massive changes that break everything else.

It was willing to construct a new program with seemingly the same functionality, but the assembly code was so incomprehensible that the whole project was useless as a learning tool. And the C version would already have been faster.
Sorry to say, but less talented humans like myself are already totally useless at this.
Wait, I thought you said it understood everything..
I just wanted to see what it would look like. Lesson learned.
I'm ready to get downvoted again for my takes, but as a person who writes and trains DL models, I will die on this hill: "people need to produce high-quality data." It can be code, it can be art, but we can't rely on those models and trust in the things that they provide.
When I do test-driven development, all the thinking goes into the tests, and the actual code writes itself. LLMs hardly help make that any faster.

Having a complete test suite may make it easier to use LLMs for refactoring and adding features, but then you still have to write the tests for the new functionality.
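A minimal Python illustration of that workflow (a made-up example, not from the thread): the test is written first and carries all the design decisions, and the implementation that satisfies it is almost mechanical.

```python
# Hypothetical TDD sketch: the test encodes the actual decisions
# (lowercasing, punctuation handling, collapsing runs of separators);
# the implementation then nearly writes itself.

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  A --- B  ") == "a-b"  # runs collapse, no edge dashes

def slugify(title):
    # written after (and driven by) the test above
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

test_slugify()
print("all tests pass")
```

The point stands either way: whether a human or an LLM types the `slugify` body is almost irrelevant once the test pins down the behavior, which is why the thinking lives in the tests.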
I recommend looking into a subject called "reinforcement learning", the way AI acquired superhuman skills in chess, go, etc.