There's no equivalent mandate for software engineers. Nothing stops you from spending years as a pure "prompt pilot" and losing the ability to read a stack trace or reason about algorithmic complexity. The atrophy is silent and gradual.
The author's suggestion to write code by hand as an educational exercise is right but will be ignored by most, because the feedback loop for skill atrophy is so delayed. You won't notice you've lost the skill until you're debugging something the agent made a mess of, under pressure, with no fallback.
My biggest lessons were from hours of pain and toil, scouring the internet. When I finally found the solution, the dopamine hit ensured that lesson was burned into my neurons. There is no such dopamine hit with LLMs. You vaguely try to understand what it’s been doing for the last five minutes and try to steer it back on course. There is no strife.
I’m only 24 and I think my career would be on a very different path if the LLMs of today were available just five years ago.
Does this mean you'd be incapable of learning anything? Or could you possibly learn way more because you had the innate desire to learn and understand, along with the best tool possible to do it?
It's the same thing here. How you use LLMs is all down to your mindset. Thoroughly review and ask questions about what it did and why; ask if it could have been done some other way instead. Hell, ask it just the questions you need and do the rest yourself, or don't use it at all. I was working in C++, for example, with heavy use of mutexes and shared and weak pointers, which I hadn't done before. The LLM fixed a race condition, and I got to ask it precisely what the issue was and to draw a diagram showing what was happening in that exact scenario before and after (a minimal sketch of that class of bug follows).
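For anyone who hasn't hit that class of bug: here's a minimal, made-up sketch of the kind of race an LLM can walk you through, not the actual code from that session. Two threads increment a counter held behind a shared_ptr; without the lock, increments get lost:

```cpp
#include <iostream>
#include <memory>
#include <mutex>
#include <thread>

// Hypothetical shared state: two threads bump a counter behind a shared_ptr.
struct Counter {
    int value = 0;
    std::mutex m;  // guards value
};

int main() {
    auto counter = std::make_shared<Counter>();

    auto worker = [counter] {
        for (int i = 0; i < 100000; ++i) {
            // Without this lock, ++value is a read-modify-write race:
            // both threads can read the same old value and lose increments.
            std::lock_guard<std::mutex> lock(counter->m);
            ++counter->value;
        }
    };

    std::thread t1(worker), t2(worker);
    t1.join();
    t2.join();

    // With the mutex this reliably prints 200000; without it, usually less.
    std::cout << counter->value << '\n';
}
```

Comment out the lock_guard and run it a few times and the total comes up short, which is the "before" picture a diagram of the race would show.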
I feel like I'm learning more because I am doing way more high-level things now and spending way less time on the stuff I already know or don't care to know (non-fundamentals, like syntax and even libraries/frameworks). For example, I don't really give a fuck about being an expert in Spring Security. I care about how authentication works as a principle, what methods would be best for what, etc., but do I want to spend 3 hours trying to debug the nuances of configuring the Spring Security library for a small project I don't care about?
Yes. This strikes me as obvious. People don't have the sort of impulse control you're implying by default; it has to be learned just like anything else. This sort of environment would make you an idiot if it's all you've ever known.
You might as well be saying that you can just explain to children why they should eat their vegetables and rely on them to be rational actors.
Saying it all comes down to how you use the LLM comes from a privileged position. You likely already know how to code. You likely know how to troubleshoot. Would you develop those same skillsets today starting from zero?
This is just the same concern whenever a new technology appears.
* Socrates argued that writing would weaken memory, that it would create only superficial knowledge without real understanding. But writing didn't destroy memory. It allowed us to store information and share it with many others far away.
* The internet and web indexers made information instantly accessible, letting you search for exactly the information you need. The fear was that people would just copy from the internet, yet researching information became way faster, and anyone with internet access could learn for themselves. Just look at the number of educational websites with courses to learn from.
Each time a new technology arrived and people feared it could degrade knowledge, the tools only helped us increase our knowledge.
Just like with books and the internet, people could simply copy and not learn anything; it's not exclusive to LLMs. The issue isn't the tool itself, but how we use it. Instead of learning how to search, the new generation will probably need to learn how to prompt, ask, and evaluate whether the LLM is hallucinating.
LLMs making you dumber is far from being "disproven" by science. Quite the opposite: https://arxiv.org/abs/2506.08872
The internet never fell. I bet it’ll be the same with AI. You will never not have AI.
The big difference is the internet was a liberation movement: Everything became open. And free. AI is the opposite: By design, everything is closed.
Kids today couldn't imagine how people used to live just 100 years ago, like it was the dark ages. People from that age would probably look at kids 10 years ago and think, these poor children! They don't know how to work hard! They don't know anything about life! They're glued to these bizarre light machines! Every age is different.
It should be the government's job to make it as easy as possible for people to retrain, switch jobs, and start new careers. Obviously taxation should be reworked too, if AI and robots replace lots of jobs in some sectors. Profits produced by efficiency gains shouldn't be concentrated among just a few billionaires.
Honestly, I don't really know what to do. I spent my whole life (so far; I'm still very young) falling in love with programming, and now I just don't find this agent thing fun at all. But I just don't know how to find my niche if using LLMs truly does end up being the only way for me to build valuable things with my only skills.
It's pretty depressing and very scary. But I appreciate this article for at least conveying that so effectively...
I'm genuinely curious, I feel very differently and excited about this agent thing.
Asking because, unlike a lot of other commentary, this struck me as being more about the act itself than about being depressed/anxious for financial reasons, etc.
I got into programming because the act of spinning a web of code just feels like what I'm designed to do.
Vibe coding definitely has some of that, but it feels so detached from any understanding of the computer itself. I feel like I'm bossing someone around — and I would never want to be a non-coding manager. I'm curious, how/why do you feel so different?
(Obviously the financial side is stressful too, but I feel like I'm in a good spot to figure that out either way.)
I remember as the internet took off and you could just search for things, I thought it made programming too easy. You never had to actually learn how things worked; you could just search for the specific answer, and someone else would have done the hard work of figuring out how to use the tools available for your particular type of problem.
Over the years, my feelings shifted, and I loved how the internet allowed me to accomplish so much more than I could have trying to figure it all out from books.
I wonder if AI will feel similar.
For programming, I don't like it. It's like a master carpenter building furniture from IKEA. Sure, it's faster, he doesn't have to think very hard, and the end result is acceptable, but he feels lazy, and after a while he feels like he's losing his skills.
The best days of computing for me were what you remember. A computer was just a blank slate. You turned it on, and had a ">" blinking on the screen. If you wanted it to do anything you had to write a program. And learning how to do that meant practice and study and reading... there were no shortcuts. It was challenging and frustrating and fun.
I never had the feeling that being able to search for things on the internet made things too easy. For me it felt like a natural extension to books for self-learning, it was just faster.
LLMs feel entirely different to me, and that's where I do get the sense that they make things "too easy" in that (like the author of the OP blog post) I no longer feel like I am building any sort of skill when using them other than code review (which is not a new skill as it is something I have previously done with code produced by other humans for a long time).
As with the OP author I also think that "prompting" as a skill is hugely overblown. "Prompting" was maybe a bit more of a skill a year ago, but I find that you don't really have to get too detailed with current LLMs, you just have to be a bit careful not to bias them in negative ways. Whatever value I have now as a software developer has more to do with having veto power in the instances where the LLM agent goes off the rails than it does in constructing prompts.
So for now I'm stuck in a situation where I feel like, for the work I am being paid to do, I basically have to use LLMs, because not doing so is effectively malpractice at this point (there are real efficiency gains). But for selfish reasons, if I could push a button to erase the existence of LLMs, I'd probably do it.
I think this depends on how you are using the internet. Looking up an API or official documentation is one thing, but asking for direct help on a specific problem via Stackoverflow seems different.
I've seen the following prediction from a few people and am starting to agree with it: software development (and possibly most knowledge work) will become like farming. A relatively small number of people will do with large machines what previously took armies of people. There will always be some people exploring the cutting edge of thought and feeding their insights into the machine, just as I imagine there are biochemists and soil-biology experts who produce knowledge to inform the decisions made by the people running large farming operations.
I imagine this will lead to profound shifts in the world that we can hardly predict. If we don't blow ourselves up, perhaps space exploration and colonization will become possible.
You could do that without that knowledge back in the day too; we've had languages higher-level than assembler forever.
It's just that the range of knowledge needed to maximize machine usage is far smaller now. Before, you had to know how to write a ton of optimizations yourself; nowadays you have to know how to write your code so the compiler has an easy job optimizing it.
Before, you had to manage memory accesses by hand; nowadays, making sure you're not jumping across memory too much and being aware of how the cache works is enough (toy illustration below).
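A minimal sketch of what "not jumping across memory" means in practice; the matrix size and function names are made up for illustration. Both functions compute the same sum over a row-major matrix, but the first walks memory in storage order while the second strides across it, which is exactly the difference cache lines punish:

```cpp
#include <cstddef>
#include <iostream>
#include <vector>

// Same sum, two traversal orders, over a row-major matrix stored
// as one contiguous vector.
double sum_row_order(const std::vector<double>& a, std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            s += a[r * cols + c];  // walks memory sequentially: cache-friendly
    return s;
}

double sum_column_order(const std::vector<double>& a, std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            s += a[r * cols + c];  // strides `cols` doubles per step: cache-hostile
    return s;
}

int main() {
    const std::size_t n = 4096;  // hypothetical size, big enough to spill caches
    std::vector<double> a(n * n, 1.0);
    // Identical results either way; time the two calls to see the cache effect.
    std::cout << sum_row_order(a, n, n) << ' ' << sum_column_order(a, n, n) << '\n';
}
```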
Fewer abstractions, deeper understanding, fewer dependencies on others. These concepts show up over and over and not just in software. It's about safety.
Same here. Except that, as a native French speaker, there simply weren't that many quality books about programming/computers that I could easily find in French.
So at 11 years old I also learned English, by myself, by using computers (which were in English back then) and by reading computer books.
And we'd exchange tips with other kids in the neighborhood who also had computers and were also learning to code (like my neighbors who eventually, 20 years later, created a software startup in SoCal).
Of course, asking a question was another matter, likely to result in a rebuke for violating the group's arcane decorum. But given how pervasive "RTFM" culture was back then, most "n00bs" were content to do just that (RTFM) until they came up against something that genuinely wasn't covered in some FAQ or manpage.
It seems to imply a great deal of pessimism about human self-determination. Like, I can't be anything good unless there is an external mold pressing me into the good shape. And it can't be my choice because I would never choose anything good. I'll only do good things for myself if forced.
Since AI is supposedly taking everybody's jobs and making it so we can choose never to better ourselves, maybe future governments will need to institute taskmasters to force us into regimens of physical and mental health and vigor. A whole new adult school system will have to be instituted.
* "search on steroids" - get me to the thing I need or ask whether the thing I need exists, give me few examples and I can get it running.
* getting the trivial and uninteresting parts out of the way, like writing some helper function for stuff I'm doing now, I'll just call AI, let it do its thing and continue writing the code in meantime, look back ,check if it makes sense and use it.
So I'm not really cheating myself out of the learning process, just outsource the parts I know well enough that I can check for correctness but save time writing
This is what I am still grappling with. Agents make me more productive, but also probably worse at my job.
How is this any different than building Ikea furniture? If I build my "Minska" cupboard using the step-by-step manual, did I learn something profound?
But the nice thing about a cupboard and its components is that they are real objects, so you remember with your whole body (like the feel of a screw that isn't correctly inserted). Software development is 90% a mental activity.
That said, I think you're still learning things building IKEA-style software. The first time I learned how to program, I learned from a book and tried things out by copying listings from the book by hand into files on my computer and executing them. Essentially, it was programming-by-IKEA-manual, but it was valuable because I was trying things out with my own hands, even if I didn't fully understand every time why I needed the code I'd been told to write.
From there I graduated to fiddling with those examples and making changes to make it do what I wanted, not what the book said. And over time I figured out how to write entirely new things, and so on and so forth. But the first step required following very simple instructions.
The analogy isn't perfect, because my goal with IKEA furniture is usually not to learn how to build furniture, but to get a finished product. So I learn a little bit about using tools, but not a huge amount. Whereas when typing in that code as a kid, my goal was learning, and the finished product was basically useless outside of that.
The author's example there feels like a bit of both worlds. The task requires more independent thought than an IKEA manual, so they need to learn and understand more. But the end goal is still practical.
A few days back, I tried to implement a PDF reader by pure vibe coding. I used all my free Antigravity, Cursor, and Co-pilot tokens to create a half-baked, but working Next.js PDF-reader that (to be honest) I wouldn't have glued together without 2 weeks of work. As an MLE, I have done negligible web development using JavaScript and have mostly worked with Python and C.
But the struggle actually started after the free tokens were exhausted. I was feeling anxious even looking at those Next.js files. I can't quite describe it, but it was probably some kind of fear: fear of either not being able to debug/implement a new feature, or of not being willing to put in precious hours (precious because of FOMO that I could be doing something cool with AI-paired vibe coding) to understand and build the feature myself.
I abandoned that project that day and haven't opened it since, partly because I am waiting for the renewal of the free tokens.
BTW - my coworker is not AI. It is a flesh-and-bones SWE.
> I do read the code, but reviewing code is very different from producing it, and surely teaches you less. If you don’t believe this, I doubt you work in software.
I work in software, and for every single line I write I read hundreds of them. If I am fixing bugs in my own (mostly self-education) programs, I read my program several times, over and over again. If writing programs taught me anything, it is how to read programs most effectively, and also how to write programs so they can be read most effectively.
I'm not sure whether this should humble or confuse me. I am definitely WAY heavier on the write side of this equation. I love programming. And writing. I love them both so much that I wrote a book about programming. But I don't like reading other people's code. Nor reading generally. I can't read faster than I can talk. I envy those who can. So reading code has always been a pain. That said, I love little clever golf-y code, nuggets of perl or bitwise magic. But whole reams of code? Hundreds upon hundreds of lines? Gosh no. But I respect anyone who has that patience. FWIW, I find that one can still gain an incredibly rich understanding without having to read too heavily, by finding the implied contracts/interfaces and then writing up a bunch of assertions to see if you're right, TDD style (a rough sketch of what I mean follows).
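Here's a rough sketch of that assertion-probing idea; slugify is a hypothetical stand-in for someone else's code that, in real life, you'd link against rather than define yourself. You write your guesses about the implied contract as assertions, run them, and let the failures correct your mental model:

```cpp
#include <cassert>
#include <cctype>
#include <string>

// Stand-in for code somebody else wrote; normally you'd only skim its
// signature and link against the real implementation.
std::string slugify(const std::string& title) {
    std::string out;
    for (unsigned char ch : title) {
        if (std::isalnum(ch)) out += static_cast<char>(std::tolower(ch));
        else if (!out.empty() && out.back() != '-') out += '-';
    }
    while (!out.empty() && out.back() == '-') out.pop_back();
    return out;
}

int main() {
    // Guesses about the implied contract, written as assertions.
    // The ones that fail tell you where your mental model is wrong.
    assert(slugify("Hello World") == "hello-world");  // lowercases and hyphenates?
    assert(slugify("") == "");                        // empty in, empty out?
    assert(slugify("a  b!") == "a-b");                // collapses separators, trims?
}
```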
Not that I had an opportunity to write new code, but most of my work through my experience was either to fix bugs or to add new functionality to an existing system with as little code as possible. Both goals mean reuse and understanding of the existing code. For both "reuse" and "understanding" you have to thoroughly read existing code a dozen or so times over.
Tests (in TDD) can show you the presence of bugs, not their absence. For the absence of bugs, one has to thoroughly know the problem domain and the source code solving the problem.
I think here lies the difference OP is talking about. You are reading your own code, which means you had to first put in the effort to write it. If you use LLMs, you are reading code you didn't write.
Gemini 3 by itself is insufficient. I often find myself tracing through things or testing during runtime to understand how things behave. Claude Opus is not much better for this.
On the other hand, pairing with Gemini 3 feels like pairing with other people. No one is going to get everything right all the time. I might ask Gemini to construct gcloud commands or look things up for me, but we’re trying to figure things out together.
Man, it would rule so much if programmers were literate and knew how to actually communicate what they intend to say.
It's ironic that the more ignorant one is the one calling another ignorant.
Alright, I've had my fun with the name-calling. I will now explain the stunningly obvious. Not a thing anyone should have to do for someone as sharp as yourself, but there we are...
For someone to produce that text after growing up in an English speaking environment, they would indeed be comically inept communicators. Which is why the more reasonable assumption is that English is not in fact their native language.
Not merely the more generous assumption. Being generous by default would be a better character trait than not, though still arguably a luxury. But it is also simply the more reasonable assumption, by plain numbers and reasoning. So not only were you a douche, you had to go out of your way to select a less likely possibility just so the douche you wanted to be would fit the situation.
Literate programmers indeed.
Yes. Absolutely. To what end, though? Is your end deterministic like a cryptographic protocol or loose like pagination of a web page? Is your end feature delivery or 30 years of rock solid service delivery at minimal cost?
AI is a dangerous tool. It exposes fundamental questions by automating away the mundane. We have had the luxury of not thinking deep and hard about intent and value creation/capture and system architecture. AI is putting us face to face with our ineptitude: maybe it wasn’t the tech stack or the programmers or the whatnots? Maybe the idea was shait, maybe I had no understanding of the value added of my product? Maybe …?
You get the best gear - musical instrument, bicycle, camera, etc - the pros have and still the results are not great. Gotta ask why. We are experiencing this at literally industrial scale.
Maybe he meant "reviewing code from coding agents"? Reviewing code from other humans is often a great way to learn.
I learn the most from struggling through a problem, and reading someone's code doesn't show me all the wrong approaches they attempted before it came to look the way it does now.
I agree that if I don't already know how to implement something, seeing a solution before trying it myself is not great, that's like skipping the homework exercises and copying straight from the answer books.
These steps are what help you solve other issues in the future.
Code reviews are a cinch because if you get confused by the real code you can switch to reviewing the tests and vice versa.
But I don’t think the answer here is to double down on reading the code and understanding that deeply. We’re rapidly moving past this.
I think the answer is to review the code for very obvious bad choices. But then it’s about proper validation. Check out the app, run the flows, use it for real. Does it _actually_ function?
Or that's what is working for me. I cannot review all the LOC, and I'm starting to feel like I don't want to.
Why would they be here to stay? The crux of the author's argument is that using them is detrimental in the long term. The correct response to that is not a lukewarm response of "maybe do some coding now and again", it is "don't use tools that make you worse".
[...] since I work at an AI lab and stand to gain a great deal if AI follows through on its economic promise.
And there it is. If the LLM is indeed such a master at complex coding tasks that we don't understand, why not ask it some questions about how the code works?
You can even ask directly about the concern. "I am worried that by letting you do everything I am not learning how the system works. Could you tell me more about what you did and how I might think through it if I needed to do it myself?"
https://www.gophergolfer.com/iphone
NextJS, Rails, GraphQL, React Native
Certainly wasn't one-shot for all of it, but case in point: it has dozens and dozens of "features", all LLM-implemented.
We're looking at the twilight of programming as a human skill. The LLMs are just that good.
Like somebody else said, there is still a need for QA (and usually for requirements gathering too), that's a part of the development cycle. Developing software that is meant to be used by humans with zero humans involved isn't realistic.
The act of designing software might be changing, less writing the actual code, but someone who knows what the fuck they're doing still has to guide the ship.