The project I work on has been steadily growing for years, but the number of engineers taking care of it has stayed the same or even declined a bit. Most features are isolated and left untouched for months unless something comes up.
So far, I've managed the growing scope by relying on tests more and more. Then I switched to developing exclusively against a simulator. Checking changes against the real system became rare and more involved - when you do have to check, it's usually the gnarliest parts.
Last year, I noticed I could no longer answer questions about several features because, despite working on them for a couple of months and reviewing PRs, I barely held the details in my head soon afterwards. And all of this even before coding agents penetrated deep into our process.
With agents, I noticed exactly what the article talks about. Reviewing PRs feels even more detached; I have to exert deliberate effort because the tacit knowledge of the context hasn't formed yet, and you have to review more than before - the stuff goes in one ear and out the other. My teammates report similar experiences.
Currently, we are trying various approaches to deal with that, but it's still too early to tell. We now commit agent plans alongside the code so we hopefully don't lose the insights gained during development. Tasks with vague requirements, which we previously would have understood implicitly, are now a bottleneck, because typing requirements into an agent for planning immediately surfaces the kinds of issues you'd otherwise only think of during backlog grooming. Skill MDs are often dumps of tacit knowledge we previously kept distributed in less formal ways. Agents are forcing us to up our process game and discipline, and real people benefit from that too. As the article mentions, I am looking forward to tools picking up some of that slack.
One other thing that surprised me was that my eng manager was seemingly oblivious to my ongoing complaints about growing cognitive load and confusion rate. It's as if the concept was alien to them, or they couldn't comprehend that other people handle it at a different capacity than they do.
Engineering managers, in my experience (even ones with deep technical backgrounds), often miss the trees for the forest. The best ones go to bat for you, especially once they've verified that they can do something to unblock or support you. But that's still different from being in the terminal or IDE all day.
Offloading cognitive load is pretty much their entire role.
We desperately need a new set of abstractions for human- and AI-based knowledge.
I prefer the perspective of humans as a network of abstractions piloting an organic robot. Without a mathematical framework this is an unsatisfying claim, I know... But just hear me out.
This allows for extreme complexity between individuals, and for language to act as a standard serial comm channel with high-dimensional abstractions embedded across words - a network of abstractions unto itself. Models of this network are embedded in books and 'live' in oral history.
LLMs, then, are just a much better model of the abstraction networks that span people through language (and often thought).
Notice that they're NOT people. And that we are actively developing network science to accommodate the complexities inherent in examining both the real world and modeled versions of these networks.
As an example, the tools to layer on top can be envisioned as more networks on top of these networks: reasoning and cognitive patterns are captured in recursive transformer-based LLMs. So a metacognitive model might actively generate a LoRA for each prompt.
Again, much math and research is needed. But it's been a very useful set of abstractions thus far.
Writing things down is important for organisational persistence of information but that is something else.
You will also add a markdown file to the changelog directory, named with the current date and time (`date -u +"%Y-%m-%dT%H-%M-%SZ"`), recording the prompt and a brief summary of the changes you made; this should be the same summary you gave the developer in the chat.
From that I get the prompt and the summary for each change. It's not perfect but it at least adds some context around the commit.
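As a rough sketch of what that instruction produces (the filenames and fields here are my guess at the setup described, not the commenter's actual tooling):

```python
# Hypothetical helper mirroring the changelog instruction above:
# one markdown file per change, named with the UTC timestamp,
# containing the prompt and the same summary shown in the chat.
from datetime import datetime, timezone
from pathlib import Path

def write_changelog_entry(prompt: str, summary: str, directory: str = "changelog") -> Path:
    stamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H-%M-%SZ")
    path = Path(directory) / f"{stamp}.md"
    path.parent.mkdir(exist_ok=True)
    path.write_text(f"# {stamp}\n\n## Prompt\n\n{prompt}\n\n## Summary\n\n{summary}\n")
    return path

write_changelog_entry(
    "Add retry logic to the upload job",
    "Wrapped the upload call in a 3-attempt retry with backoff.",
)
```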
We've always had the problem that understanding while writing code is easier than understanding code you've written. This is why, in the pre-AI era, Joel Spolsky wrote: "It's harder to read code than to write it."
The worst code bases I have to deal with have either no philosophy or a dozen competing and incompatible philosophies.
The best are (obviously) written in my battle tested and ultra refined philosophy developed over the last ~25 years.
But I'm perfectly happy working even in code bases written with philosophies that I violently disagree with, just as long as the singular (or at least compatible) philosophy has a certain maturity and consistency to it.
I didn't remember every line, but I still had a very good grasp of how and why it was put together.
(edit: and no, I don't have some extra good memory)
Other times, I can make a small change to something that doesn't require much time, and once it's tested and committed, I quickly lose any memory of even having done it.
The hard part is gaining familiarity with the project's coding style and high-level structure (the "intuition" of where to expect what you're looking for), and this is something that comes back with relative ease if you already put in that effort in the past - like a song you once had memorized but couldn't recall after all these years, until you heard the first verse somewhere. And of course, memorizing songs you wrote yourself is much easier; it just kind of happens on its own.
I don't know if this becomes prod code, but I often feel the need to create something like a Jupyter notebook and build the solution up step by step to ensure I understand it.
Of course I don’t need to understand most silly things in my codebase. But some things I need to reason about carefully.
With LLM-first coding, this experience is lost.
AI tools don't prevent people from understanding the code they are producing - it wouldn't actually take that much time - but there's a natural tendency to avoid hard work. Of course, AI code is generally terrible, making the process even more painful, but you were just looking at the context that created it, so you have a leg up.
Meanwhile some stuff Claude wrote for me last week I barely remember what it even did at a high level.
I absolutely feel the cognitive debt with our codebase at work now. It's not so much that we are churning out features faster with AI (although that is certainly happening) - it's that we are tackling much more complex work that previously we would have said no to.
Anyone pretending gen-ai code is understood as well as pre-gen-ai, handwritten code is totally kidding themselves.
Now, whether the trade off is still worth it is debatable, but that's a different question.
The hope being that if the feature were to be kept or the demo fleshed out, developers would need to shape and refactor the project as per newly discovered requirements, or start from scratch having hopefully learnt from the agentic rush.
To me, it always boils down to LLMs being probabilistic models which can do more of the same things that have been done thousands of times, but which also exhibit emergent reasoning-like properties that allow them to combine patterns sometimes. It's not actual reasoning; it's a facsimile of reasoning. The bigger the models and the better the RLHF and fine-tuning, the more useful they become, but my intuition is that LLMs will always asymptotically approach actual reasoning without being able to get there.
So the notion of no-human-brain-in-the-loop programming is to me, a fool's errand. I do obviously hope I am right here, but we'll see. Ultimately you need accountability and for accountability you need human understanding. Trying to move fast without waiting for comprehension to catch up (which would most likely result in alternate, better approaches to solving the problem at hand) increases entropy and pushes problems further down the road.
For example, handwritten code also tended to be reviewed manually by each other member of the team, so the probability of someone recalling was higher than say, LLM generated code that was also LLM reviewed.
AI spec docs and documentation also have this problem.
Claude often makes a hash of our legacy code, and then I go look at what we had there before it started and think, "I don't even know what I was thinking; why is this even here?"
The complaint about "code nobody understands" because of accumulating cognitive debt also happened with hand-written code. E.g. some stories:
- from https://devblogs.microsoft.com/oldnewthing/20121218-00/?p=58... : >Two of us tried to debug the program to figure out what was going on, but given that this was code written several years earlier by an outside company, and that nobody at Microsoft ever understood how the code worked (much less still understood it), and that most of the code was completely uncommented, we simply couldn’t figure out why the collision detector was not working. Heck, we couldn’t even find the collision detector! We had several million lines of code still to port, so we couldn’t afford to spend days studying the code trying to figure out what obscure floating point rounding error was causing collision detection to fail. We just made the executive decision right there to drop Pinball from the product.
- and another about the Oracle RDBMS codebase from https://news.ycombinator.com/item?id=18442941
(That hn thread is big and there are more top-level comments that talk about other ball-of-spaghetti projects besides Oracle.)
My prompts are literally "brainstorm next slice" or "brainstorm how to fix this bug" or "talk me through trade-offs of approach A vs. B", so those prompts aren't meaningful on their own.
It's quite effective, but I'm a team of one.
(attributed to Martin Fowler but I can't find any solid evidence)
When software engineers become agent herders their day-to-day starts to resemble more that of a manager than that of an engineer.
* thinking about the big picture
* knowing how you can verify that the code matches the big picture.
In both cases, sometimes you are happily surprised, and sometimes you discover that the thing you told the one writing the code three times to do was still not done.
This is the most insidious part. It's not even that bad code gets deployed. That can be fixed and hopefully (by definition) the market weeds that out.
The problem is that the market doesn't seem to operate like that, and instead the engineer who cares loses their job because they're not hitting the metrics.
Constraints often lead to better results. Think of Duke Nukem Forever and how long it took them to release a nothingburger.
I just watched a show called A Knight of the Seven Kingdoms; the showrunners were given a limited budget compared to their cousin shows, and it resulted in a better product.
Sometimes those metrics keep things on the rails
This is a common trope, but in my experience many of the engineers I've met know that's not how a business runs. Dealing with the constraints and weighing them is one of the essential skills of any engineer. Knowing when a product is just good enough is one of the things that makes you senior.
What I don't like is the impossible middle ground where people are asked to 20X their output while taking full responsibility for 100% of the code at the same time. That is the kind of magical thinking that I am certain the market will eventually delete. You have to either give up on comprehension or accept a modest, 20% productivity boost at best.
https://www.benguttmann.com/blog/double-it-or-cut-it-in-half...
Brownfield legacy projects with god classes and millions of lines of code which need to behave coherently across multiple channels - without anything in the written code actually linking them? That shit is not even gonna get a 20% boost; you'll almost always be quicker on your own. What you do get is a fatigue bonus, by which I mean you'll invest less of yourself for the same amount of output, while getting slightly slower, because nobody I've ever interacted with is able to keep such code bases in their mind well enough to branch out to multiple agents.
On projects that have been architected to be owned by an LLM? A modular monolith with hints linking all the channels together, etc.? Yeah, you're gonna get a massive productivity boost, and you'll also be using your brain a shitton, actually reasoning out how you'll get the LLM to be able to work on the project beyond silly weekend-toy-project scope (100k-MM LOC).
But let's be real here, most employees are working with codebases like the former.
And I'm still learning how to do the second. While I've significantly improved since I started one year ago, I wouldn't consider myself a master at it yet. I continue to try things out, and I frequently try things that I ultimately decide to revert or (best case) discard before merging to main, simply because I notice significant impediments to modifying or adding features with a given architecture.
Seriously, this is currently bleeding edge. Things have not even begun to settle yet.
We're way too early for the industry to have normalized around LLMs yet.
Every one of us is a pioneer if we choose to be. We have only scratched the surface as an industry.
Now you could say that expectation has to change but I don’t see how—the people paying you expect you to produce working software. And we’ve always been biased in favor of short term shipping over longer term maintainability.
Also, the essay notes that once a "worse" system is established, it can be incrementally improved. Following that argument, we can say that as long as the AI code runs, it creates a footprint. Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess, to a satisfying degree.
(From GP) "AI coding sometimes sacrifices correctness or cleanness for simplicity, but it will win and win big as long as the produced code works per its users' standards."
Those users' standards are an ephemeral target for any software beyond a one-shot script or a hobby project with a minimal user:dev ratio. That incorrect and unclean code simply isn't conducive to the many iterations needed when those "users' standards" change. And as we all know, that change is _inevitable_, and oftentimes happens before the software in question has even had a single release! Get ready to throw ever more tokens at trying to correct and clean if you ever really "win big" and need to actually support the product.
It's very much gross short-sighted thinking that goes right along with the gross short-sighted thinking providing all the [fake] value around this crap.
I have the same feeling when creating my artworks: I suffer through the process of creation and learning, while someone else makes money with an AI-generated artwork.
Sometimes I wonder if it matters at all.
I agree with you, but I think you and I are on the wrong website for this mentality.
> Once the software has users and VC funding, developers can go back and incrementally improve or refactor the AI's mess, to a satisfying degree.
Or in my case, the AI is going back to refactor some poor human-written code. I will fully admit that AI writes better code than me and does it faster.
He's moving so fast that he's not bothering to learn how the system actually works. He just implicitly trusts what the model tells him. I'm trying to get him to do end-to-end manual testing using the system itself (log into the web app in a local or staging environment and go through the actions that the user would go through), but he just has the AI generate tests and trusts the output. So he completely misses things that would be clear if he learned the system at a deep level and could see how the individual project he's working on fits in with the larger system.
I see this with all the junior engineers on my team. They've never learned how to use a debugger and don't care to learn. They just ask the model. Sometimes they think critically about the system and the best way to do something, but not always. They often aren't looking that critically at the model's output.
The core of the article is: "AI-assisted development potentially short-circuits this replenishment mechanism. If new engineers can generate working modifications without developing deep comprehension, they never form the tacit knowledge that would traditionally accumulate. The organization loses knowledge not just through attrition but through insufficient formation."
But is it possible this phenomenon is transient?
Isn't part of the presumed value-add of LLM coding agents in the meta-realm around coding; e.g., that well-structured human+LLM-generated code (greenfield in particular) will be organized in such a way that the human will not have to develop deep comprehension until needed (e.g., for a bug fix or optimization), and then only for a working set of the code, with the LLM bringing the person up to speed on the working set in question and also providing the architectural context to frame it properly?
But to offer a counter argument, would the same thing not have happened with the rise of high-level languages? The machine code was abstracted away from engineers and they lost understanding of it, only knowing what the high-level code is supposed to do. But that turned out fine. Would LLMs abstracting the code away, so engineers only understand the functionality (specs, tests), also be fine for the same reason? Why didn't cognitive debt rise with high-level languages?
A counter-counter-argument is that compilers are deterministic, so understanding the procedure of the high-level language meant you understood the procedure that mattered in the machine code, and the stuff abstracted away wasn't necessary to the code's operation. But LLMs are probabilistic, so understanding the functionality does not mean understanding the procedure of the code in the ways that matter. But I'd love to hear other people's thoughts on that.
Any argument that attempts to frame LLMs as analogous to compilers is too flawed to bother pursuing. It's not that compilers are deterministic (an LLM can also be deterministic if you have control over the seed), it's that the compiler as a translator from a high level language to machine code is a deductive logical process, whereas an LLM is inherently inductive rather than deductive. That's not to say that LLMs can't be useful as a way of generating high level code that is then fed into a compiler (an inductive process as a pipeline into a deductive process), but these are fundamentally different sorts of things, in the same way that math is fundamentally different from music (despite the fact that you can apply math to music in plenty of ways).
- deterministic agents, where the model guarantees the same output with a seed
- much faster coding agents, which will allow us to "compile" or "execute" natural language without noticing the LLM
- maybe just running the whole thing locally so privacy and reliability are not an issue
We're not there yet, but once we have that, then I agree there won't be too much of a difference between using a high-level language and plain text. There's going to be a massive shift in programming education, though, because knowing an actual programming language won't matter any more than knowing assembly does today.
The purpose of high level languages is to make the structure of the code and data structures more explicit so it better captures the “actual” program model, which is in the mind of the programmer. Structured programming, type systems, modules, etc. are there to provide solid abstractions in which to express that model.
None of that applies to giving an LLM a feature idea in English and letting it run. (Though all of it is helpful for keeping an LLM from going completely off the rails.)
The last part is what matters. There are no such clear rules in LLM behavior. Yes, you can get it to behave roughly like a rule, but there's no clear-cut demarcation between what's in and what's not.
It did not turn out fine. Fortunately no one took it seriously, and at least seniors still have an intuitive model of how the hardware works in their head. You don't have to "see" the whole assembly language when writing high level code, just know enough about how it goes at lower levels that you don't shoot yourself in the foot.
When that's missing, due to lack of knowledge or perhaps time constraints, you end up on Accidentally Quadratic or they name a CVE after you.
0/1s → assembly → C → high-level languages → frameworks → AI → product
The engineer keeps moving up the abstraction chain with less and less understanding of the layers below. The better solution would be creating better verification, testing, and determinism at the AI layer. Surely we'll see the equivalent of high-level languages and frameworks for AI soon.
I wrote a SaaS project over the weekend. I was amazed at how fast Claude implemented features: one sentence turned into a TDD that looked right to me, and the features worked.
but now, three weeks later, I have only the outlines of how it works, and regaining context on the system sounds painful
In projects I hand-wrote, I could probably still locate the major files and recall the system architecture after years of being away.
"The right amount of AI is not zero. And it’s not maximum."
Very disturbing, as I thought my technical skills would help me clarify the global picture. And it is exactly the contrary that is happening.
https://ionanalytics.com/wp-content/uploads/2026/02/The_Wron...
I really like the article. It's not trying to sell fear (which does sell); it doesn't paint the leadership as clueless. Nobody knows what is going to happen in the future. The article might be wrong on a few things. But it doesn't matter. It points out a few assumptions that people might be missing, and that is great.
This never held.
As somebody who has inherited codebases thrown over the wall through acquisitions and re-orgs, there is absolutely nothing in this article related to "code generated by AI" that cannot be attributed to "code generated by humans who are no longer at the company." Heck, these have happened when revisiting code I myself wrote years ago.
In a previous life 10 years ago, there was one large Python codebase I inherited from an acquisition, where a bug occurred because a method argument was sometimes passed in as a string and sometimes as a number. Despite spending hours reproducing it multiple times to debug it, I could never figure out the code path that caused the bug. I suspect it was due to some dynamic magic where a function name was generated by concatenating disparate strings, each of which was propagated via multiple asynchronous message queues (making the debugger useless), and then "eval"d. After multiple hours of trial and error and grepping, I could never find the offending call site, and the original authors had long since moved on. My fix was just to put an "x = int(x)" in the function and move on.
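For readers who haven't hit this pattern, here's a hedged sketch (hypothetical names, not the actual codebase) of the kind of string-built, eval'd dispatch that makes the call site invisible to grep and the debugger:

```python
# Hypothetical reconstruction of the anti-pattern described above, not the real code.
handlers = {}

def handler(fn):
    handlers[fn.__name__] = fn
    return fn

@handler
def process_order_legacy(x):
    x = int(x)          # the defensive fix mentioned above: coerce whatever arrives
    return x * 2

def dispatch(prefix, suffix, payload):
    # The function name only exists at runtime, assembled from strings that arrived
    # on separate async queues, so grep and the debugger never see a direct call site.
    name = prefix + "_" + suffix
    fn = eval(name, handlers)
    return fn(payload)

# One message supplies "process_order", another "legacy", a third "42" as a string.
print(dispatch("process_order", "legacy", "42"))   # -> 84
```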
I would bet this was due to a shortcut somebody took under time pressure, something you can totally avoid simply by having the AI refactor everything instead.
We know what the solutions for that are, and they're the same -- in fact, they should be the default mode -- for AI-generated code. They are basically everything that we consider "best practices": avoiding magic, better types, comprehensive tests, documentation, modularity, and so on.
Yes I am aware this means my job is gone.
Now that we have coding assistants and so-called AI, 'software developers' are prompting code that far exceeds their abilities.
The piper will need to be paid, one way or another.
Editing a one-shot, on the other hand, reminds me of trying to mod a WordPress plugin.
- Document the purpose
- Document the research
- Document the design
- Document the architecture
- Document the plans
- Document the implementation
Also put in documentation that summarizes the important things so that you understand broadly the why and how, and where to look for more detailed information.
This documentation not only makes your agent consume fewer tokens, it also makes it easier for YOU to keep your head above water!
The only annoying thing is that the AI will often forget to update docs, but as long as you remember to tell it to update things from time to time, it won't drift too far. Regular hygiene is key.
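As a toy illustration of that hygiene step (the directory names are assumptions on my part, not the commenter's setup), you could even script a staleness check so you know when it's time to ask the agent for a docs pass:

```python
# Minimal staleness check, assuming a docs/ directory like the list above
# (purpose.md, design.md, architecture.md, ...) next to a src/ tree.
from pathlib import Path

def newest_mtime(root: str) -> float:
    # Most recent modification time of any file under root.
    return max((p.stat().st_mtime for p in Path(root).rglob("*") if p.is_file()), default=0.0)

last_code_change = newest_mtime("src")
for doc in sorted(Path("docs").glob("*.md")):
    if doc.stat().st_mtime < last_code_change:
        print(f"stale: {doc} (ask the agent to refresh it)")
```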
I thought when you vibe it, you're supposed to keep doing that forever.
If you need an explanation, ask the clanker.
This all sounds like the classic path I've seen low quality coders take, coding themselves into a corner until changes effectively become impossible.
For real people, that's when the coder finds a new job, often a promotion off the back of their dreadful architectural decisions, or if it's an agency, abandons the client.
I wonder if it will follow the same failure states, has anyone caught it making multiple versions of the same function yet? With slightly different bugs in them?
Also, as always, a highly modular codebase is very important. If I only have to reason about a single module, then I don't have to hold full context on the system.
It seems we’re now in a world where engineers are responsible for creating a good environment where an agent is able to gain context on the architecture and validate its work via tests (e2e, unit, smoke, etc). Then it can get into its own feedback loop and find the correct solution on its own much faster.
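A minimal sketch of what that "validate its work" hook could look like (the tool names here, ruff and pytest, and the test paths are assumptions about the project's tooling, not something from the comment):

```python
# Single entry point an agent's feedback loop could call after each change.
import subprocess
import sys

def validate() -> bool:
    checks = [
        ["ruff", "check", "."],          # lint / style
        ["pytest", "-q", "tests/unit"],  # fast unit tests
        ["pytest", "-q", "tests/smoke"], # smoke / e2e-ish tests
    ]
    for cmd in checks:
        if subprocess.run(cmd).returncode != 0:
            print("FAILED:", " ".join(cmd))
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if validate() else 1)
```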
Part of me feels like we could have increased both velocity and comprehension a great amount twenty years ago already if we'd only had the same considerations for our fellow developers.
This complexity-to-understanding compression will be a big market going forward.
> This gap between output velocity and comprehension velocity is cognitive debt.
I have felt that lack of absorption during the last few months; adding doomscrolling to the equation, I have felt how my thinking is disappearing.
I tried to speculatively expand that idea in this post
> A species that cannot follow the reasoning of its own systems does not supervise them; it simply inhabits them until they stop working.
I feel this idea is closely related to additive bias. People are scared of breaking things, so the safest way is to just add another tiny part to an already complex system. As cognitive debt accumulates faster, this additive bias just becomes stronger imo.
I don't let it edit code, but I do have it guide me. Writing the code myself forces me to think about it, question it in isolation and tie it to the overall design.
I don't always do so; sometimes I let it make the edits for simpler, smaller changes, but I write the code myself for any new feature.
Maybe it's because I work in such a small team on a still-starting project, but even with the chaos of LLM-generated code, I can't imagine such a case as above that the LLMs couldn't also address.
Great read though and I appreciated the article.
I have no clue what my compiler is emitting every time I hit F5. I don't need to comprehend IL or ASM because I have a ~deterministic way to produce this output from a stable representation.
Writing a codebase as natural language is definitely feasible, but how we're going about it right now is not going to support this. A vast majority of LLM coding is coming out of ad-hoc human in the loop or stochastic agent swarms. If we want to avoid the comprehension gap we need something closer to a compiler & linker that operates over a bucket of version-controlled natural language documents.
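As a toy illustration of that "compiler and linker over version-controlled documents" idea (all names here are hypothetical, and the generation step is stubbed out), you could pin generated code to a content hash of the spec documents so it only regenerates when the specs actually change:

```python
# Toy sketch of "compiling" version-controlled natural-language specs.
# generate_code() stands in for whatever LLM call you'd use; everything
# else is ordinary, deterministic bookkeeping.
import hashlib
from pathlib import Path

def spec_hash(spec_dir: str) -> str:
    h = hashlib.sha256()
    for p in sorted(Path(spec_dir).glob("*.md")):
        h.update(p.read_bytes())
    return h.hexdigest()

def generate_code(specs: str) -> str:
    # Placeholder for the LLM "compile" step.
    return f"# code generated from specs ({len(specs)} chars of spec)\n"

def build(spec_dir: str, out: str = "generated.py") -> None:
    digest = spec_hash(spec_dir)
    stamp = Path(out + ".hash")
    if stamp.exists() and stamp.read_text() == digest:
        print("specs unchanged; reusing previous build")
        return
    specs = "\n".join(p.read_text() for p in sorted(Path(spec_dir).glob("*.md")))
    Path(out).write_text(generate_code(specs))
    stamp.write_text(digest)
    print(f"rebuilt {out} from specs {digest[:12]}")
```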
Also, you can ask the coding agent for help at understanding it, unlike the old days when whoever wrote it is long gone.
You could also ask it to write bits of code to do experiments to figure out what the code really does. Then you could reproduce the same experiment.
These things are pretty similar to what a human might do to reverse-engineer a program. Some skills might atrophy a bit, but the idea that this makes you helpless is a fallacy.
But what is more broadly true is that as we adopt new technologies we depend on them more and more, and eventually start removing the backups. I'm old enough to remember when people didn't have Internet and saw Internet service gradually change from a luxury to a necessity. Eventually people cancel their landlines. Eventually, you can't get a new landline even if you wanted to.
Which means fixes can go in faster than it would take to first grok it.
What's missing in literally every single one of these conversations is testing.
Literally all you have to do is implement test-driven development and you solve like 99.9% of these issues.
Even if you don't go fully TDD (which I'm not necessarily a fan of), having an extensive testing suite that covers edge cases is necessary no matter what you do, but it's a need-to-have when your code velocity is high.
This is true for a company full of juniors pumping out code like the early days of Facebook, say, which allowed their monorepo to grow insanely; it took major refactors every few years, but it didn't really matter because they had the resources to do it.
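Coming back to the testing point above, a minimal sketch (the function and the edge cases are hypothetical stand-ins, not from the thread) of the kind of edge-case test that pins behavior down before high-velocity changes land:

```python
# Characterization tests that pin current behavior before fast-moving changes.
# normalize_username is a hypothetical function standing in for your own code.
import pytest

def normalize_username(raw: str) -> str:
    return raw.strip().lower()

@pytest.mark.parametrize("raw, expected", [
    ("Alice", "alice"),          # happy path
    ("  bob  ", "bob"),          # surrounding whitespace
    ("", ""),                    # empty input stays empty
    ("ÅSA", "åsa"),              # non-ASCII survives lowercasing
])
def test_normalize_username(raw, expected):
    assert normalize_username(raw) == expected
```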
When you need to implement something yourself, you have to make decisions when faced with the reality of turning ideas into code.
An AI agent sometimes surfaces these; and sometimes it just makes a choice.
The risk is that tests just embed these decisions as policy in code, without there having been proper consideration.
Often there's a core ambiguity in a conception somewhere, and because of the limited context of an AI, it can implement things one way and then another for the next feature, without actually hitting the inconsistency.
So a poor specification, perfectly technically implemented, won't actually achieve the intended goal, because the user has not correctly specified it, and so ultimately it will fail at the highest-level task.
But that won't necessarily reveal itself in the code; it would reveal itself in other people, or the user, say, failing to adopt that tool for their workflow.
That absorption only takes place in the mind of that individual, unfortunately. That doesn't help when they no longer work there or are on vacation.
The ideal situation is the solo open source project. You wrote all 200K lines of code yourself, and will maintain them until death. :)
It's like how we might not know how sewing is done, but we know how to put instructions into a loom to produce it. I also agree it is still important to read that code and understand how it works, maybe take a moment to see what is happening, but we are learning something entirely different here.
We shouldn't be giving it up just for some mild convenience (which so far seems really mild overall). The gain simply doesn't match the long-term loss.
BOOKSTORES: How to Read More Books in the Golden Age of Content
After AI, that understanding often disappears, to the point where we can't even direct the AI to fix the problem because we don't know what's wrong.
Also, AI often changes the code only in the context of the current problem, so we might get more bugs while fixing one.
Now, it can take only a few days or weeks.
Genuine question: so what?
First of all, team members leave all the time, and you're stuck staring at code nobody instantly understands.
Second of all, LLMs are a godsend in helping you understand how existing code works. Just give it the files and ask it to explain what the components do and how they interact. It'll give you a high-level summary, and then you can interactively dig in, far faster than has ever been possible before.
Heck, I often don't remember anything about code I wrote six months ago. It might as well have been written by someone else. And that's not an original observation either -- I remember hearing the same thing from other developers decades ago, as justification for writing better code comments.
Modern codebases are often far too large for any one person or even an entire team to fully comprehend at once. The team has cycled through generations of team members, with nobody who can remember the original rationales for design decisions.
LLMs are helping comprehension more than ever. I don't understand why people aren't talking about this more.
This just isn't true at all in my experience. Do I remember every detail of code I haven't looked at for six months? No, but I can go back and recall pretty quickly how it's structured and find my way around. I'm much more able to do that with code I wrote and thought deeply about. It's like riding a bicycle - if you invested in building up your knowledge once, you can bring it back more easily.
LLMs can sometimes help you to understand someone else's code but they can also hallucinate and I think people gloss over how frequently this happens. If no one actually understands or can verify what it's saying, all I can say is good luck.
These are more important than ever, because we don't have the crutch of "Teammate x wrote this and they are intimately familiar with it" which previously let us paper over bad abstractions and messy code.
This is felt more viscerally today because some people (especially at smaller/newer companies) have never had to work this way, and because AI gives us more opportunity to ignore it
Like it or not, the most important part of our jobs is now reviewing code, not writing it. And "shelved" ideas will now look like unmerged PRs instead of unwritten code.
Please email us (hn@ycombinator.com) to communicate with the mods. We don't get alerted to mentions of usernames and we don't get even close to seeing every comment, especially after a thread has gone from the front page.
This article shows flaws with AI driven development.
One could argue with its stance, but I took it as a given (the equation for cognitive debt touches on science).
It feels entirely logical to view LLMs/coding agents as an almost final step in the short-term focus the overall system has been thriving on.
We are all supposed to be advancing through these levels. Moving at a pace where you actually understand the system you're responsible for is now considered a performance issue. But also, we're "still held responsible for quality".
Needless to say I'm dusting off my resume, but I'm sure plenty of other companies are following the same playbook.
EDIT: fixed a few mistakes
Nicely put
I propose a new paradigm: programmer experience, PX.
So, code generated by AI ideally would follow the rules of PX. Whatever those may turn out to be.
- legacy guys
- super 10x guys who say no to you all the time
- students
- even more legacy
- open source
I got to a point where I honestly care so little about all these guys' damn architectural decisions, which to me - a practitioner, scientist, researcher and academics teacher - made similarly very little sense.
Really, top coders and veteran Java enterprise copy-pasters: I care so little about your damn code; it is very wrong most of the time. I care very little about the architectural decisions most of the open-source people took, as they very often come from weird backgrounds and those decisions do not match mine. Needless to say, they often know their architectural decisions were wrong 10 years later (the QGIS crowd is a great example in this regard). I don't care about somebody's greatly designed ProC code. Neither do I care if Twitter was doing 1000 API calls, which it seems to have been doing in reality; even though I despise the Elon guy, his new X is arguably faster and more stable.
I don't care how well your Docker setup scales; if you need to scale to 1M VMs and back again, there is a fair chance you're Google, so I don't care about you either, as you are not the good guys anymore.
Likewise, I very much would bet 99% of visitors here don't really care what architectural decisions YC took when they decided to showcase Algolia's search. Very little interest in this.
The whole idea that there is a right way to do architecture or code is in total and direct contradiction with the history of computing, which has a good record of many successful projects not having great architecture (MySpace for example) and great projects that did not fly, even though they were top notch.
What I care about is people and what kind of people they are. Are they fakers? Are they smart? Are they in love with their code, or do they simply see it as a tool? Are they smart enough to take a step back? Are they calm enough, are they inspiring? And of course - am I getting paid to do it.
So this massive outcry is super misplaced, and you know what - I don't care if you created your code with Claude or by threading it one char at a time, because eventually it's going to be me, with little to no knowledge, who will be forced to untangle this wonderful mess of yours.
And, no, you cannot teach people how to code. You can show them the way, and they learn their approach to it. Leave 5 people alone in 5 rooms, you'll get 5 architectures, perhaps all of them very solid.
I am currently doing the OMSCS at Georgia Tech and taking Machine Learning (7641) which has always had a reputation for being difficult. I don't mind a challenge, but I feel that the AI policy creates a sense of permanent and unpayable cognitive debt and learning deficits.
The class has traditionally taken a "data-first approach" to ML, where instead of focusing on the details of the different algorithms, students must apply them to datasets and analyze their performance and trade-offs. There are four colossal end-to-end ML projects which culminate in an 8-page IEEE-style paper each. (I actually prefer this general direction rather than an algo-heavy one - I find it more valuable to my work in business applications.)
For their AI policy, they've decided that all code can be generated by AI - the only rule is that the paper contents must be original analysis. To avoid taking any risks, I do not even use spell-checking AIs on the paper.
However, it seems to me that to compensate for the AI help, they've cranked up the amount of ground that needs to be covered in the projects. In the first project we were given two datasets, six algos to test, and a bunch of params and metrics to experiment with, producing a real combinatorial explosion of stuff to work on. This is on top of up to around 150+ pages of scientific reading in some weeks.
I am leaning very heavily on LLMs to generate massive chunks of the code, but I feel like I can't keep up at all.
I don't even feel that my skills coming in were poor. I am a confident programmer, recently brushed up on math, and this is actually my second CS degree and my fourth course at Georgia Tech. I am rather familiar with the feeling of difficult courses or work problems pushing me to my intellectual limits, where I stare into the abyss, but this feels radically different.
I am pushed to work at a higher (less detailed) level of abstraction, as many have foretold LLMs would do. I feel like I am learning about the data-science meta-process but cannot keep up with details that are not even that fine. There is some complex math in there that could probably make my head spin, but I cannot even get to that - I am cognitively stuck at higher abstractions like keeping up with so many families of algos, datasets, APIs, and thousands of lines of AI-generated code.
In some sense this may be a shape of things to come at work too, but here's where that analogy breaks down: the performance of our work doesn't matter and we're not even graded on it. As long as we convincingly explain why things happen, we should be good, but even as I start to get the class and focus on that, I feel like I can barely keep up. If only they had made a bit of room with the AI productivity increase to focus a bit longer on that!
I thought I was losing it but this morning I found a Reddit thread with dozens of current students venting and found some solace in seeing that I'm not alone.
I also feel for the teaching staff, who I think are absolutely well-meaning, competent and attentive, but who just like the rest of us are trying to wing it in this brave new world.
AI is transformative for the good and the bad, and it's going to take us all many years to sort it out. We're not even started understanding social media and AI could be orders of magnitude more complex and also further complicating the former.
Can we get rules against this or something at this point? It's every other post.