I've tolerated writing my own code for decades. Sometimes I'm pleased with it. Mostly it's the abstraction standing between me and my idea. I like to build things, the faster the better. As I have the ideas, I like to see them implemented as efficiently and cleanly as possible, to my specifications.
I've embraced working with LLMs. I don't know that it's made me lazier. If anything, it inspires me to start when I feel in a rut. I'll inevitably let the LLM do its thing, and then, LLMs being what they are, I'll take over and finish the job my way. I seem to be producing more product than I ever have.
I've worked with people and am friends with a few of these types; they think their code and methodologies are sacrosanct, and that if AI moves in there's no place for them. I got into the game for creativity, it's why I'm still here, and I see no reason to select myself for removal from the field. The tools, the syntax, it's all just a means to an end.
The industry will change drastically, but you can still enjoy your individual pleasures. And there will be value in unique, one-off and very different pieces that only an artisan can create (though there will now be a vast number of "unique" screen-printed tees on the market as well).
The only reason I got sucked into this field was because I enjoyed writing code. What I "tolerated" (professionally) was having to work on other people's code. And LLM code is other people's code.
My general way of working now is, I'll write some of the code in the style I like. I won't trust an LLM to come up with the right design, so I still trust my knowledge and experience to come up with a design which is maintainable and scalable. But I might just stub out the detail. I'm focusing mostly on the higher level stuff.
Once I've designed the software at a high level, I can point the LLM at this using specific files as context. Maybe some of them have the data structures describing the business logic and a few stubbed out implementations. Then Claude usually does an excellent job at just filling in the blanks.
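A minimal sketch of what that workflow can look like; the Invoice structure and the function names here are invented for illustration, not taken from any real codebase:

```python
from dataclasses import dataclass

# Hypothetical data structure describing a piece of business logic.
@dataclass
class Invoice:
    subtotal: float
    tax_rate: float

def invoice_total(inv: Invoice) -> float:
    """Return the subtotal plus tax, rounded to cents."""
    raise NotImplementedError  # stubbed out: detail left for the LLM

# After pointing the LLM at this file as context, a plausible
# filled-in body might look like this:
def invoice_total_filled(inv: Invoice) -> float:
    return round(inv.subtotal * (1 + inv.tax_rate), 2)
```

The stub plus its docstring carries the design intent; the LLM only fills in mechanics that are easy to review.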
I've still got to sanity check it. And I still find it doing things that look like they came straight from a junior developer. But I can suggest a better way and it usually gets it right the second or third time. I find it a really productive way of programming.
I don't want to be writing the data layer of my application. It's not fun for me. LLMs handle that for me and let me focus on what makes my job interesting.
The other thing I've kinda accepted is to just use it or get left behind. You WILL get people who use this and become really productive. It's a tool which enables you to do more. So at some point you've got to suck it up. I just see it as a really impressive code generation tool. It won't replace me, but not using it might.
what's the largest (traffic, revenue) product you've built? quantity >>>> quality of code is a great trade-off for hacking things together but doesn't lend itself to maintainable systems, in my experience.
Have you seen it work over the long term?
I don’t think that’s what LLMs offer, mind you (right now anyway), and I often find the trade offs to not be worth it in retrospect, but it’s hard to know which bucket you’re in ahead of time.
But I’m not a software engineer first, I’m a builder first. For me, using these tools to build things is much better than not using them, and that’s enough.
I find his point to be that there is still a lot of value in understanding what is actually going on.
Our business is one of details and I don't think you can code strictly having an LLM doing everything. It does weird and wrong stuff sometimes. It's still necessary to understand the code.
> "as harsh as it may seem, I would say that they’re trying to work in a field that isn’t for them."
I find this statement problematic for a different reason: we live in a world where minimum wages (if they exist) are lower than living wages, and mean wages are significantly lower than the point at which well-being indices plateau. In that context, calling people out for working in a field that "isn't for them" is futile - if you can get by in the field then leaving it simply isn't logical.
THAT SAID, I do find the above comment incongruent with reality. If you're in a field that's "not for you" for economic reasons that's cool but making out that it is in fact for you, despite "tolerating" writing code, is a little different.
> I got into the game for creativity
Are you confusing creativity with productivity?
If you're productive that's great; economic imperative, etc. I'm not knocking that as a positive basis. But nothing you describe in your comment would fall under the umbrella of what I consider "creativity".
It's almost always a leaky abstraction, because sometimes you do need to know how the lower layer really works.
Every time this happens, developers who have invested a lot of time and emotional energy in understanding the lower level claim that those who rely on the abstraction are dumber (less curious, less effective, and they write "worse code") than those who have mastered the lower level.
Wouldn't we all be smarter if we stopped relying on third-party libraries and wrote the code ourselves?
Wouldn't we all be smarter if we managed memory manually?
Wouldn't we all be smarter if we wrote all of our code in assembly, and stopped relying on compilers?
Wouldn't we all be smarter if we were wiring our own transistors?
It is educational to learn about lower layers. Often it's required to squeeze out optimal performance. But you don't have to understand lower layers to provide value to your customers, and developers who now find themselves overinvested in low-level knowledge don't want to believe that.
(My favorite use of coding LLMs is to ask them to help me understand code I don't yet understand. Even when it gets the answer wrong, it's often right enough to give me the hints I need to figure it out myself.)
I would consider it a tool to teach and learn code if used appropriately. However, LLMs are bullshit if you ask them to write something whole. Pieces, yes; an entire codebase... good luck having one maintain consistency and a grasp of what the end goal is. The reason they work great for reading existing code is that the input becomes a context they can refer back to, but because LLMs are just weighted values, they have no way to visualize the final output without significant input.
The point is LLMs may allow developers to write code for problems they may not fully understand at the current level or under the hood.
In a similar way using a high level web framework may allow a developer to work on a problem they don’t fully understand at the current level or under the hood.
There will always be new tools to “make developers faster” usually at a trade off of the developer understanding less of what specifically they’re instructing the computer to do.
Sometimes it’s valuable to dig and better understand, but sometimes not. And always responding to new developer tooling (whether LLMs or Web Frameworks or anything else) by saying they make developers dumber, can be naive.
The original "law of leaky abstractions" talked about how the challenge with abstractions is that when they break you now have to develop a mental model of what they were hiding from you in order to fix the problem.
(Absolutely classic Joel Spolsky essay from 22 years ago which still feels relevant today: https://www.joelonsoftware.com/2002/11/11/the-law-of-leaky-a... )
Having LLMs write code for you has a similar effect: the moment you run into problems, you're going to have to make sure you deeply understand exactly what they have done.
I'm finding that, if I don't have solid mastery of at least one aspect of generated code, I won't know that I have problems until they touch a domain I understand.
You can generally trust that transistors won't randomly malfunction and give you wrong results. You can generally trust that your compiler won't generate the wrong assembly, or that your interpreter won't interpret your code incorrectly. You can generally trust that your language's automatic memory management won't corrupt memory. It might be useful to understand how those layers work anyway, but it's usually not a hard requirement.
But once you reach a certain level of abstraction (usually one level above the programming language), you'll start running into more and more issues resulting from abstraction leaks that require understanding the layer below to properly fix. Probably the most blatant example of this nowadays is "React developers" who don't know JS/CSS/HTML and WILL constantly be running into issues that they can't properly solve as a result, and are forced to either give up or write the most deranged workarounds imaginable, consisting of hundreds of lines of unintelligible spaghetti.
AI is the highest level of abstraction so far, and as a result, it's also the leakiest abstraction so far. You CANNOT write proper functional and maintainable code using an LLM without having at least a decent understanding of what it's outputting, unless you're writing baby's first todo app or something.
I want to frame this. I am sick to death of every other website and application needing a gig of RAM and making my damn phone hot in my hand.
LLMs currently have tiny holes and we don't know if we can fix them. Established abstractions are more like cups that may leak, but only under certain conditions (when it's hot).
This is the weakest point which breaks your whole argument.
I see it happening ALL the time: newer web developers enter the field from an angle of high abstraction, and whenever those abstractions don't work well, they are completely unable to proceed. They wouldn't be in that place if they knew the low level, and it DOES prevent them from delivering "value" to their customers.
What is even worse: since these developers don't understand exactly why some problem manifests, and don't even understand exactly what their abstraction truly solves, they wrongly proceed to solve the problem using the wrong (high-level) tools.
That has some amount to do with the level of abstraction but almost everything to do with inexperience. The lower level you get, the harder the impact of inexperience.
New web developers are still sorting themselves out and they are at a stage where they’ll suck no matter what the level of abstraction.
It’s not the same as the other times. The naysayers might be the same elitists as the last time. But that’s irrelevant because the moment is different.
It’s not even an abstraction. An abstraction of what? It’s English/Farsi/etc. text input which gets translated into something that no one can vouch for. What does that abstract?
You say that they can learn about the lower layers. But what’s the skill transfer from the prompt engineering to the programming?
People who program in memory-managed languages are programming. There’s no paradigm shift when they start doing manual memory management. It’s more things to manage. That’s it.
People who write spreadsheet logic are programming.
But what are prompt engineers doing? ... I guess they are hoping for the best. Optimism is what they have in common with programming.
Agree, especially useful when you join a new company and you have to navigate a large codebase (or a badly maintained one, which is even worse by several orders of magnitude). I had no luck asking the LLM to fix this or that, but it did mostly OK when I asked how something works and what the code is trying to do (it makes mistakes, but that's fine, I can see them, which is different from code I just copy and paste).
…but I haven’t joined a new company since LLMs were a thing. How often is this use case necessary to justify $Ts in investment?
Criticisms of each change are not somehow invalid just because the change is inevitable, like all the changes before it.
But do LLMs provide a higher level of abstraction? Is this really one of those transition points in computing history?
If they do, it's a different kind to compilers, third-party APIs or any other form of higher level abstraction we've seen so far. It allows programmers to focus on a different level of detail to some extent but they still need to be able to assemble the "right enough" pieces into a meaningful whole.
Personally, I don't see this as a higher level of abstraction. I can't offload the cognitive load of understanding, just the work of constructing and typing out the solution. I can't fully trust the output and I can't really assemble the input without some knowledge of what I'm putting together.
LLMs might speed up development and lower the bar for developing complex applications but I don't think they raise the problem-solving task to one focused solely on the problem domain. That would be the point where you no longer need to know about the lower layers.
Oddly enough, using an AI assistant, despite it guessing incorrectly as often as it did, helped me learn and write code faster!
If you are writing SQL, it’s helpful to understand how database engines manage storage and optimize queries. If you write database engine code, it’s helpful to understand (among many other things of course) how memory is managed by the operating system. If you write OS code, it’s helpful to understand how memory hardware works. And so on. But you can write great SQL without knowing much of anything about memory hardware.
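As a small sketch of that layer-below payoff, using Python's built-in sqlite3 (the table and index names are invented for illustration): knowing how the engine looks rows up tells you when a query will scan the whole table and which index would fix it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY,"
            " customer_id INTEGER, total REAL)")

query = "SELECT total FROM orders WHERE customer_id = ?"

# Without an index, the engine has no choice but a full table scan.
before = str(con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())

# Understanding the planner suggests the fix: index the filter column.
con.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")
after = str(con.execute("EXPLAIN QUERY PLAN " + query, (7,)).fetchall())

print(before)  # plan mentions scanning the orders table
print(after)   # plan mentions idx_orders_customer
```

None of this requires knowing anything about memory hardware, which is the point of the layering argument above.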
The reverse is also true in that it’s good to know what is going on one level above you as well.
Anyway my experience has been that knowledge of the adjacent stack layers is highly beneficial and I don’t think exaggerated.
I won’t waste my time too much reacting to this nonsensical comment, but I’ll just give this example: LLMs can hallucinate, generating code that isn’t real. LLMs don’t work off fixed rules; they’re influenced by a seed. Normal abstraction layers aren’t.
I dearly hope you’re arguing in bad faith, otherwise you are really deluded with either programming terms or reality.
Don't confuse low-level tedium with CS basics, if you're arguing that knowing how computers work is not relevant to working as a SWE then sure, but why would a company want a software dev that doesn't seem to know software? Usually your irreplaceable value as a developer is knowing and mitigating the leaks so they don't wind up threatening the business.
This is where the industry most suffers from not having a standardized-ish hierarchy, you're right that most shops don't need a trauma surgeon on-call for treating headaches but there's still many medical options before resorting to random grifter who simply "watched some Grey's Anatomy" as "medschool was a barrier for providing value to customers".
When the LLM heinously gets it wrong 2, 3, 4 times in a row, I feel a genuine rage bubbling that I wouldn't get otherwise. It's exhausting. I expect within the next year or two this will get a lot easier and the UX better, but I'm not seeing how. Maybe I lack vision.
Maybe it’s the fact that you know you could do it better in less time that drives the frustration. For a junior dev, perhaps that frustration is worth it because there’s a perception that the AI is still more likely to be saving them time?
I’m only tolerating this because of the potential for long term improvement. If it just stayed like it is now, I wouldn’t touch it again. Or I’d find something else to do with my time, because it turns an enjoyable profession into a stressful agonizing experience.
The last two years have brought staggering progress.
In an experiment (six months long, twice repeated, so a one-year study), we gave business students ChatGPT and a data science task to solve that they did not have the background for (develop a sentiment analysis classifier for German-language recommendations of medical practices). With their electronic "AI" helper, they could find a solution, but the scary thing is they did not acquire any knowledge along the way, as exit interviews clearly demonstrated.
As a friend commented, "these language models should never have been made available to the general public", only to researchers.
That feels to me like a dystopian timeline that we've only very narrowly avoided.
It wouldn't just have been researchers: it would have been researchers and the wealthy.
I'm so relieved that most human beings with access to an internet-connected device have the ability to try this stuff and work to understand what it can and cannot do themselves.
- it puts the focus on syntax instead of the big picture. Instead of finding articles or Stack Overflow posts explaining things beyond how to write them, AI gives them the "how" so they don't think about the "why"
- students almost don't ask questions anymore. Why would they, when an AI gives them code?
- AI output contains notions, syntax and API not seen in class, adding to the confusion
Even the best students have a difficult time answering basic questions about what was covered in the last (3-hour) class.
It’s the college’s responsibility now to teach students how to harness the power of LLMs effectively. They can’t keep their heads in the sand forever.
And then eventually overall we learned what the limits of Wikipedia are. We know that it’s generally a pretty good resource for high level information and it’s more accurate for some things than for others. It’s still definitely a problem that Wikipedia can confidently publish unverified information (IIRC wasn’t the Scottish translation famously hilariously wrong and mostly written by an editor with no experience with the language?)
And yet, I think if these days people were publishing think pieces about how Wikipedia is ruining the ability of students to learn, or advocating that people shouldn’t ever use Wikipedia to learn something, we’d largely consider them crackpots, or at the very least out of touch.
I think AI tools are going to follow the same trajectory. Eventually we’ll gain enough cultural knowledge of their strengths and weaknesses to apply them properly and in the end they’ll be another valuable asset in our ever growing lists of tools.
at the same time in my own life, there are tasks that I don't want to do, and certainly don't want to learn anything about, yet have to do.
For example, figuring out a weird edge case combination of flags for a badly designed LaTeX library that I will only ever have to use once. I could try to read the documentation and understand it, but this would take a long time. And, even if it would take no time at all, I literally would prefer not to have this knowledge wasting neurons in my brain.
If there were a large number of people who didn't quite understand what it meant for a function to be continuous, let alone smooth, who were using such a calculator, I think you'd see similar issues to the ones that are identified with LLM usage: a large number of students wouldn't learn how to compute definite or indefinite integrals, and likely wouldn't have an intuitive understanding of smoothness or continuity either.
I think we don't see these problems with calculators because the "entry-level" ones don't have support for calculus-related functionality, and because people aren't taught how to arrange the problems that you need calculus to solve until after they've given some amount of calculus-related intuition. These conditions obviously aren't the case for LLMs.
What do you think is the big difference between these tools and *outsourcing*?
AI is far more comparable to delegating work to *people*.
Calculators and compilers are deterministic. Using them doesn't change the nature of your work. AI, depending on how you use it, gives you a different role. So take that as a clue: if you are less interested in building things and more interested in getting results, maybe a product management role would be a better fit.
Calculators for the most part don't solve novel problems. They automate repetitive basic operations which are well-defined and have very few special cases. Your calculator isn't going to do your algebra for you, it's going to give you more time to focus on the algebraic principles instead of material you should have retained from elementary school. Algebra and calculus classes are primarily concerned with symbolic manipulation, once the problem is solved symbolically coming to a numerical answer is time-consuming and uninteresting.
Of course, if you have access to the calculator throughout elementary school then you're never going to learn the basics, and that's why schoolchildren don't get to use calculators until the tail end of middle school. At least that's how it worked in the early 2000s when I was a kid; from what I understand, kids today get to use their phones and even laptops in class, so maybe I'm wrong here.
Previously I stated that calculators are allowed in later stages of education because they only automate the more basic tasks. Matlab can arguably be considered a calculator which does automate complicated tasks, and even when I was growing up the higher-end TI-89 series was available, which actually could solve algebra and even simple forms of calculus problems symbolically; we weren't allowed access to these when I was in high school because we wouldn't learn the material if there was a computer to do it for us.
So anyway, my point (which is halfway an agreement with the OP and halfway an agreement with you) is that AI and calculators are fundamentally the same: each needs to be a tool to enhance productivity, not a crutch to compensate for your own inadequacies[1]. This is already well understood in the case of calculators, and it needs to be well understood in the case of AI.
[1] Actually, now that I think of it, there is an interesting possibility of AI being able to give mentally-impaired people an opportunity to do jobs they might never be capable of unassisted, but anybody who doesn't have a significant intellectual disability needs to be wary of over-dependence on machines.
Calculators don't pretend to think or solve a class of problems. They are pure execution. The comparison in tech is probably compilers, not code.
A junior developer was tasked with writing a script that would produce a list of branches that haven't been touched for a while. I've got the review request. The big chunk of it was written in awk -- even though many awk scripts are one-liners, they don't have to be -- and that chunk was kinda impressive, making some clever use of associative arrays, auto-vivification, and more pretty advanced awk stuff. In fact, it was actually longer than any awk that I have ever written.
When I asked them, "where did you learn awk?", they were taken by surprise -- "where did I learn what?"
Turns out they just fed the task definition to some LLM and copied the answer to the pull request.
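For comparison, the same task fits in a few lines of ordinary Python around one git plumbing call (the 90-day cutoff here is an arbitrary choice for illustration):

```python
import subprocess
import time

# One git plumbing call replaces the awk pipeline: it prints one
# "unix-timestamp branch-name" line per local branch, oldest first.
GIT_CMD = ["git", "for-each-ref", "--sort=committerdate", "refs/heads",
           "--format=%(committerdate:unix) %(refname:short)"]

def stale_branches(lines, now, days=90):
    """Filter 'timestamp name' lines down to names older than the cutoff."""
    cutoff = now - days * 86400
    stale = []
    for line in lines:
        ts, name = line.split(maxsplit=1)
        if int(ts) < cutoff:
            stale.append(name)
    return stale

def main():
    out = subprocess.run(GIT_CMD, capture_output=True, text=True,
                         check=True).stdout.splitlines()
    print("\n".join(stale_branches(out, time.time())))
```

A reviewer can verify this in a minute, which is rather the point of the anecdote: the clever awk was impressive precisely because nobody on the team could vouch for it.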
I've been using that as my own personal policy for AI-assisted code and I am finding it works well for me, but would it work as a company policy?
The developer in question has been later promoted to a team lead, and (among other things) this explains why it's "my previous place" :)
Then if you ask for some detail on a call, it's all uhm, ehm, ehhh, "I will send example later".
I'd argue that our ability to recall individual moments has gone down, but the sum of what we functionally know has gone up massively.
I'm sure somebody out there would argue that the answer is yes, but personally I have my doubts.
But sometimes the new tech is a hot x-ray foot measuring machine.
I think this is a mistake. Building things and figuring out how stuff works is not related to pressing buttons on a keyboard to form blocks of code. Typing is just a side effect of the technology used. It's like saying that in order to be a mathematician, you have to enjoy writing equations on a whiteboard, or to be a doctor you must really love filling out EHR forms.
In engineering, coming up with a solution that fits the constraints and requirements is typically the end goal, and the best measure of skill I'm aware of. Certainly it's the one that really matters the most in practice. When it is valuable to type everything by hand, then a good engineer should type it by hand. On the other hand, if the best use of your time is to import a third-party library, do that. If the best solution is to create a code base so large no single human brain can understand it all, then you'd better do that. If the easiest path to the solution is to offload some of the coding to an LLM, that's what you should do.
I've been experiencing this for 10-15 years. I type something and then wait for IDE to complete function names, class methods etc. From this perspective, LLM won't hurt too much because I'm already dumb enough.
As I get older, it feels like every time I have to use a GUI I get stuck in a sort of daze because my mind has become optimized for the specific work I usually do at the expense of the work I usually don't do. I feel like I'm smarter and faster than I've ever been at any prior point in my life, but only for a limited class of work and anything outside of that turns me into a senile old man. This often manifests in me getting distracted by youtube, windows solitaire, etc because it's almost painful to try to remember how to move the mouse around though all these stupid menus with a million poorly-documented buttons that all have misleading labels.
But it appears I'm in a better position, because I don't have to work with clearly stupid GUIs and have no strong feelings about them.
I don't generally ask it to write my code for me because that's the fun part of the job.
I’m responsible for a couple legacy projects with medium sized codebases, and my experience with any kind of maintenance activities has been terrible. New code is great, but asking for fixes, refactoring, or understanding the code base has had an essentially 2% success rate for me.
Then you have to wonder: how the hell do orgs expect to maintain and scale even more code with fewer devs, who don’t even understand how the original code worked?
LLMs are just a tool but overreliance on them is just as much of a code smell as - say - deciding your entire backend is going to be in Matlab; or all your variables are going to be global variables - you can do it, but I guarantee that it’s going to cause issues 2-3 years down the line.
How have you been using LLMs to help understand existing code?
I have been finding that to work extremely well, but mainly with the longer context models like Google Gemini.
On the other hand, I am finding LLMs increasingly useful as a moderate expert on a large swath of subjects available 24/7, who will never get tired of repeated clarifications, tangents, and questions, and who can act as an assistant to go off and research or digest things for you. It’s mostly a decent rubber duck.
That being said, it’s so easy to land in the echo chamber bullshit zone, and hitting the wall where human intuition, curiosity, ingenuity, and personality would normally take hold for even a below average person is jarring, deflating, and sometimes counterproductive, especially when you hit the context window.
I’m fine with having it as another tool in the box, but I rather do the work myself and collaborate with actual people.
It's just especially poignant/painful because developers are being hoist with their own petard, so to speak.
I'm leaning more toward engineering as a possible pivot.
In other words, I treat it exactly like stochastic autocomplete. It makes me lazier, I’m sure, but the first part of the article above is a rant against a tautology: any tool worth using ought to be missed by the user if they stopped using it!
I use LLMs to write entire test functions, but I also have specs for it to work from and can go over what it wrote and verify it. I never blindly go "yeah this test is good" after it generates it.
I think that's the middle ground, knowing where, and when it can handle a full function / impl vs a single/multi(short) line auto-completion.
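As a sketch of that middle ground, here is the kind of spec-derived test one might let an LLM draft and then verify by hand; slugify and its rules are a made-up example, not from the comment above:

```python
import re

# Spec (hypothetical): slugs are lowercase, ASCII, hyphen-separated,
# with no leading or trailing hyphens.
def slugify(title: str) -> str:
    s = title.lower()
    s = re.sub(r"[^a-z0-9]+", "-", s).strip("-")
    return s

# An LLM-drafted test is easy to review against the spec line by line.
def test_slugify_matches_spec():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"
    assert slugify("already-a-slug") == "already-a-slug"
```

Because each assertion maps directly to a clause of the spec, verifying the generated test takes far less time than writing it.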
I've been programming for over 25 years, and the joy I get from it is the artistry of it; I see beauty in systems constructed in the abstract realm. But LLM-based development removes much of that. I haven't used nor desire to use LLMs for this, but I don't want to compete with people that do, because I won't win in the short-term nature of corporate performance-based culture. And so I'm now searching for careers that will be more resistant to LLM-based workflows. Unfortunately, in my opinion this pretty much rules out any knowledge-based economy.
1. Skilled people do a good job, AI does a not-so-good job.
2. AI users get dumbed down so they can't do any better. Mediocrity normalized.
3. Replace the AI users with AI.
Kinda sorta. Little hobbyist projects can do better on relatively small and focused stuff, where the passion of a couple part-time people can cut through the crap that would be created by a thousand half-ass mythical man-month producers.
However, for a lot of stuff, only a "billion dollar company" (or other organization) can do it. Stuff with a lot of unavoidable but unrewarding scut work (e.g. web scraper maintenance) or stuff that requires more focus and effort than a couple of passionate part-time people can provide.
Also, it'll be interesting to see how LLM prompt writing develops as a skill unto itself.
I do see a time where I could use copilot or some LLM solution but only for making stuff I understand, or to sandbox high level concepts of code approaches. Given that I'm a graphic designer by trade, I like 'productivity/automation' AI tools and I see my approach to code will be the same - I like that they're there but I'm not ready for them yet.
I've heard people say I'll get left behind if I don't use AI, and that's fine as I'll just use niche applications of code alongside my regular work as it's just not stimulating to have AI fill in knowledge blanks and outsource my reasoning.
The problem is that the LLM won't find design mistakes. E.g. trying to get the value of a label in Textual: you can technically do it, but you're not really supposed to. The variable starts with an underscore, so that's an indication that you shouldn't touch it. The LLM will happily help you attempt to use a non-existent .text attribute, then start running in circles, because what you're doing is a design mistake.
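The underlying design point can be sketched without the real Textual API; FakeLabel below is a hypothetical stand-in, not Textual code. The idea is to keep the displayed value in your own state and push it into the widget, rather than reading it back out of private attributes:

```python
# Hypothetical stand-in for a TUI label widget (not the real Textual API).
class FakeLabel:
    def __init__(self, text: str):
        self._renderable = text  # leading underscore: private, don't reach in

    def update(self, text: str) -> None:
        self._renderable = text

class StatusView:
    def __init__(self):
        self.value = "idle"               # the source of truth lives here
        self.label = FakeLabel(self.value)

    def set_value(self, value: str) -> None:
        self.value = value
        self.label.update(value)          # the widget only mirrors the state
```

With this shape there is never a reason to "get the value of the label", which is exactly the design question an LLM won't raise on its own.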
LLMs are probably fairly helpful for situations where the documentation is lacking, but simple auto-complete also works well enough.
That said, the author is probably right that it has made me dumber or at least less prolific at writing boilerplate.
It's a good thing tbh. Language syntax is ultimately entirely arbitrary and is the most pointless thing to have to keep in mind. Why bother focusing on that when you can use the mental effort on the actual logic instead?
This has been a problem for me for years before LLMs, constantly switching languages and forgetting what exact specifics I need to use because everyone thinks their super special way of writing the same exact thing is best and standards are avoided like the plague. Why do we need two hundred ways of writing a fuckin for loop?
1. The problem domain is a marketing site (low risk)
2. I got tired of fixing bad LLM code
I have noticed the people who do this are caught up in the politics at work and not really interested in writing code.
I have no desire to be a code janitor.
It's a cartoon mentality. Real products have more requirements than any human can fathom, correctness is just one of the uncountable tradeoffs you can make. Understanding, or some kind of scientific value is another.
If anything but a single minded focus on your pet requirement is dumb, then call me dumb idc. Why YOU got into software development is not why anyone else did.
I heard about the term 'vibe coding' recently, which really just means copying and pasting code from an AI without checking it. It's interesting that that's a thing, I wonder how widespread it is.
Conversely: Some people want to insist that writing code 10x slower is the right way to do things, that horses were always better, more dependable than cars, and that nobody would want to step into one of those flying monstrosities. And they may also find that they are no longer in the right field.
The truth is it depends on every detail. What technology. For who. When.
If the code has to be correct, then this is right
One of the jobs of a software engineer is to be the point person for some pieces of technology. The responsible person in the chain. If you let AI do all of your job, it’s the same as letting a junior employee do all of your job: Eventually the higher-ups will notice and wonder why they need you.
I found this to be such a silly statement. I find arguments generated by AI to be significantly more solid than this.
I was an engineer before moving to more product and strategy oriented roles, and I work on side projects with assistance from Copilot and Roo Code. I find that the skills that I developed as a manager (like writing clear reqs, reviewing code, helping balance tool selection tradeoffs, researching prior art, intuiting when to dive deep into a component and when to keep it abstract, designing system architectures, identifying long-term-bad ideas that initially seem like good ideas, and pushing toward a unified vision of the future) are sometimes more useful for interacting with AI devtools than my engineering skillset.
I think giving someone an AI coding assistant is pretty bad for having them develop coding skills, but pretty good for having them develop "working with an AI assistant" skills. Ultimately, if the result is that AI-assisted programmers can ship products faster without sacrificing sustainability (i.e. you can't have your codebase collapse under the weight of AI-generated code that nobody understands), then I think there will be space in the future for both AI-power users who can go fast as well as conventional engineers who can go deep.
For example, in one of my recent blog posts I wanted to use Python's Pillow to composite five images: one consisting of the left half of the image, the other four in quadrants (https://github.com/minimaxir/mtg-embeddings/blob/main/mtg_re...). I know how to do that in PIL (have to manually specify the coordinates and resize images) but it is annoying and prone to human error and I can never remember what corner is the origin in PIL-land.
Meanwhile I asked Claude 3.5 Sonnet this:
Write Python code using the Pillow library to compose 5 images into a single image:
1. The left half consists of one image.
2. The right half consists of the remaining 4 images, equally sized with one quadrant each
And it got the PIL code mostly correct, except that it tried to load the images from a file path, which wasn't desired; but that's both an easy fix and my fault, since I didn't specify it. Point (c) above is also why I despise the "vibe coding" meme: I believe it's intentionally misleading, since identifying code and functional-requirement issues is an implicit prerequisite skill that is intentionally ignored in the hype, as it goes against the novelty of "an AI actually did all of this without much human intervention."
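For reference, the composition the prompt describes can be sketched in a few lines of Pillow. This is my own illustrative version, not the code Claude produced: the function name, argument shapes, and output size are all arbitrary, and it assumes the five images are already loaded as PIL Image objects (since loading from file paths wasn't desired). In PIL-land the origin is the top-left corner, with y growing downward.

```python
from PIL import Image

def compose_five(left: Image.Image, quads: list, size=(800, 400)) -> Image.Image:
    """Paste `left` into the left half and the four `quads` into the right half."""
    w, h = size
    canvas = Image.new("RGB", size)
    # PIL's origin (0, 0) is the top-left corner; x grows right, y grows down.
    canvas.paste(left.resize((w // 2, h)), (0, 0))
    qw, qh = w // 4, h // 2
    for i, img in enumerate(quads):
        x = w // 2 + (i % 2) * qw   # column within the right half
        y = (i // 2) * qh           # row within the right half
        canvas.paste(img.resize((qw, qh)), (x, y))
    return canvas
```

The four quadrant images are placed in row-major order (top-left, top-right, bottom-left, bottom-right of the right half), which is exactly the coordinate bookkeeping that's annoying to redo by hand every time.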
And no data, or link to data, either. Just a *waves hand* "I think it happened to me."
I don't need to remember all functions and their signatures for APIs I rarely use - it's fine if a tool like IntelliSense (or an ol' LSP really) acts like a handy cheat sheet for these. Having a machine auto-implement entire programs or significant parts of them, is on another level entirely.
Watch the whole thing, it's hilarious. Eventually these venture capitalists are forced to acknowledge that LLM-dependent developers do not develop an understanding and hit a ceiling. They call it "good enough".
The use of LLMs for constructive activities (writing, coding, etc.) rapidly produces a profound dependence. Try turning it off for a day or two, you're hobbled, incapacitated. Competition in the workplace forces us down this road to being utterly dependent. Human intellect atrophies through disuse. More discussion of this effect, empirical observations: https://www.youtube.com/watch?v=cQNyYx2fZXw
To understand the reality of LLM code generators in practice, Primeagen and Casey Muratori carefully review the output of a state-of-the-art LLM code generator. They provide a task well-represented in the LLM's training data, so development should be easy. The task is presented as a cumulative series of modifications to a codebase: https://www.youtube.com/watch?v=NW6PhVdq9R8
This is the reality of what's happening: iterative development converging on subtly or grossly incorrect, overcomplicated, unmaintainable code, with the LLM increasingly unable to make progress. And the human, where does he end up?
"Spell checkers are making people dumb"
"Wikipedia is making people dumb"
Nothing to see here.
Wrong quote - "calculators are making people bad at doing maths" was the fear. Turns out they didn't, but they didn't help either [1].
> "Spell checkers are making people dumb"
Well, at this point I assume you use "dumb" as a general stand-in for "worse at the skill in question". Here, however, research shows that spell checkers and auto-correct do indeed seem to have a negative influence on learning proper spelling and grammar [2]. The main takeaway seems to be that handwriting in particular is a major contributor to learning and practicing written language skills [2].
> "Wikipedia is making people dumb"
Honestly, I haven't heard that one before. Did you just make it up? Apart from objections by people like Plato, thousands of years ago, owning and using books, encyclopaedias, and dictionaries has generally been viewed as a sign of a cultured and knowledgeable individual in many cultures... I don't see how an online source is any different in that regard.
The decline of problem-solving and analytical skills, short attention spans, lack of foundational knowledge, and the subsequent loss of valuable training material for our beloved stochastic parrots, though, may well become a problem in the future.
There's a qualitative difference between relying on spell checkers while still knowing the words and slowly losing the ability to formulate, express, and solve problems in an analytical fashion. Worst case we're moving towards E.M. Forster's dystopian "The Machine Stops"-scenario.
[1] https://www.jstor.org/stable/42802150?seq=1#page_scan_tab_co...
[2] https://www.researchgate.net/publication/362696154_The_Effec...
Why would one work without one?
There will be many such cases of engineers losing their edge.
There will be many cases of engineers skillfully wielding LLMs and growing as a result.
There will be many cases of hobbyists becoming empowered to build new things.
There will be many cases of SWEs getting lazy and building up huge, messy, intractable code bases.
I enjoy reading from all these perspectives. I am tired of sweeping statements like "AI is Making Developers Dumb."
Back then you'd giggle about how silly that person was; you wouldn't forget your card, would you? Somewhere since then the mindset shifted, and if a machine allowed this to happen today, everybody would agree the designers of the machine did not do a good job on the user experience.
This is just a silly example, but throughout everyday life everything has become streamlined: you can cruise through a day on auto-pilot, and machines will autocorrect you, or the process of using them makes it near impossible to get into an anomalous state. Sometimes I do have the feeling all this has made us 'dumber', and that I don't actively think anymore when interfacing with things because I assume they're foolproof.
However, not having to actively think about every little thing when interfacing with systems does give a lot of free mental capacity to be used for other things.
When reading these things I always get the feeling it's simply a "kids these days" piece. Go back 40 years, to when hardly anybody used punch cards anymore. I'd imagine there were plenty of "real" developers arguing that "kids" were wasting CPU cycles and memory because they'd lost touch with the hardware, and that if they simply kept using punch cards they'd get a sense of "real" programming again.
My takeaway is, if we expect our ATMs to behave sane and keep us from doing dumb things, why wouldn't we expect at least a subset of developers wanting to get that same experience during development?
The bit about "people don't really know how things work anymore": my friend, I grew up programming in assembly; I've modified the kernel on games consoles. Nobody around me knocking out their C# and their TypeScript has any idea how these things work. I can name the people on the campus who do.
LLMs are a useful tool. Learn to use them to increase your productivity or be left behind.
But I still like to do certain things by hand. Both because it's more enjoyable that way, and because it's good to stay in shape.
Coding is similar for me. 80% of coding is pretty brain-dead: boilerplate, repetitive. Then there's the 20% that really matters, either because it requires real creativity or real intentionality.
Look for the 80/20 rule and find those spots where you can keep yourself sharp.
The dumb developers are those resisting this amazing tool and trend.
Any technology that renders a mental skill obsolete will undergo this treatment. We should be smart enough to recognize the rhetoric it is rather than pretend it's a valid argument for Luddism.
The process: instead of typing code, I mostly just talked (voice commands) to an AI coding assistant, in this case Claude Sonnet 3.7 with GitHub Copilot in Visual Studio Code and the macOS built-in Dictation app. After each change, I'd check whether it was implemented correctly and looked good in the app. I'd review the code for mistakes. If I wanted any changes, I'd ask the AI to fix them and review the code again. The code is open source and available on GitHub [2].
On one hand, it was amazing to see how quickly the ideas in my head turned into real code. Yes, reviewing the code takes time, but far less than if I were to write all that code myself. On the other hand, it was eye-opening to realize that I need to be diligent about reviewing the code written by the AI and ensuring that my code is secure, performant, and architecturally stable. There were a few occasions when the AI wouldn't realize there was a mistake (once, a compile error) and I had to tell it to fix it.
No doubt AI-assisted programming is changing how we build software. It gives you a pretty good starting point and will take you 70-80% of the way there. But a production-grade application at scale requires a lot more work on architecture, system design, database, observability, and end-to-end integration.
So I believe we developers need to adapt and understand these concepts deeply. We’ll need to be good at:
- Reading code - Understanding, verifying and correcting the code written by AI
- Systems thinking - understand the big picture and how different components interact with each other
- Guiding the AI system - giving clear instructions about what you want it to do
- Architecture and optimization - Ensuring the underlying structure is solid and performance is good
- Understand the programming language - without this, we wouldn't know when AI makes a mistake
- Designing good experiences - As coding gets easier, it becomes more important and easier to build user-friendly experiences
Without this knowledge, apps built purely through AI prompting will likely be sub-optimal, slow, and hard to maintain. This is an opportunity for us to sharpen our skills and a call to action to adapt to the new reality.

[0] https://en.wikipedia.org/wiki/Vibe_coding