I've tried Cursor and Claude Code and have seen them both do some impressive things, but using them really sucks the joy out of programming for me. I like the process of thinking about and implementing stuff without them. I enjoy actually typing the code out myself and feel like that helps me to hold a better mental model of how stuff works in my head. And when I have used LLMs, I've felt uncomfortable about the distance they put between me and the code, like they get in the way of deeper understanding.
So I continue to work on my projects the old-fashioned way, just me and vim, hacking stuff at my own pace. Is anyone else like this? Am I a dinosaur? And is there some trick for the mental model problem with LLMs?
Anecdotally, what we've found is that those using AI assistants show superficial productivity gains early on, but they learn at a much slower rate and their understanding of the systems stays fuzzy. That leads to lots of problems down the road. Senior folks are also susceptible to these effects, but to a lesser degree. We think that's because most of their experience comes from old-fashioned "natty" coding.
In a way, I think programmers need to do natty coding to train their brains before augmenting/amputating it with AI.
LLMs are here to stay. Banning them in your organization is like banning IDEs, because, you know, real programmers use plain text editors and print statements.
Yes, junior programmers will take a bit longer to learn. But assuming they will always rely on LLMs is a bit dismissive. I grew up in Eastern Europe, without internet, and basically without TV either. All I had was books, and I read lots of them. When I came to America, I saw that nobody around me had read anything close to as many books as I had, and I felt a bit smug. But I got over it: I realized that people's brains still mature even if their knowledge consumption comes in the form of movies, or the internet, or, more recently, TikTok or LLMs. Yes, maybe being able to read Umberto Eco novels will always be beyond the reach of the TikTok generation, but then reading Cervantes or Cicero in the original was always beyond my reach. I'm still living a fulfilling life even without first-hand knowledge of the classics, so it's entirely possible the LLM generation could become decent programmers without internalizing Kernighan and Ritchie.
Your analogy with book reading is very interesting, although I interpret it differently. It seems like you enjoyed reading longform books, but the environment you moved to (the US) is not reading-heavy (it's much more visual: TV, phones). The skills you developed were not as valued in this new environment. The issue is the skillset-environment mismatch. If you had moved to a reading-rich culture or community, you'd have appreciated your past reading experience.
In software engineering, I think the skillset is more like longform writing, where you have to build the mental model of the story and also be able to dig down to individual words. The more experience you have building these models from scratch and learning from other good builders, the better off you will be. People can certainly get by and "coast" on just using outputs from LLMs, the same way that there will be many LLM storywriters. But I'm concerned it'll put a ceiling on what they can accomplish. They are not developing the skillset needed at a higher level. They're stuck in-distribution, and never venture out. They may not even know what "out" is.
I guess some programmers are OK with that. And some orgs may be perfectly fine with LLM-based engineering (i.e., think of how many dysfunctional engineering teams there are; is adding LLMs that much worse?). They are willing to risk the tradeoffs. But they may later discover that it's a shrinking pool with a lot of newcomers, and that to advance their craft and profession, they may have to write some code from scratch and read Kernighan and Ritchie.
Makes perfect sense to me to keep juniors far away from that stuff.
My own experience with LLM-based coding has been wasted hours reading incorrect code for junior-dev-grade tasks, despite multiple rounds of "this is syntactically incorrect, you cannot do this, please re-evaluate based on this information" / "Yes, you are right, I have re-evaluated it based on your feedback", only for it to do the same thing again. My time would have been better spent either 1) doing this largely boilerplate task myself, or 2) assigning and mentoring a junior dev to do it, since they would have needed maybe one round of iteration.
Based on my experience with other abstraction technologies like ORMs, I look forward to my systems being absolutely flooded with nonperformant garbage merged by people who don't understand either what they are doing, or what they are asking to be done.
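To make the ORM worry concrete: the classic failure mode is the N+1 query pattern, where a lazy-loading abstraction silently turns one loop into one query per row. A minimal Python sketch with made-up models and a fake query counter (not any real ORM):

```python
# Sketch of the N+1 query problem that lazy-loading abstractions hide.
# The "database", models, and field names here are hypothetical stand-ins.

QUERY_COUNT = 0

ORDERS = [{"id": i, "customer_id": i % 3} for i in range(10)]
CUSTOMERS = {0: "Ada", 1: "Grace", 2: "Barbara"}

def fetch_orders():
    global QUERY_COUNT
    QUERY_COUNT += 1          # one query for the order list
    return ORDERS

def fetch_customer(customer_id):
    global QUERY_COUNT
    QUERY_COUNT += 1          # one query *per order* when lazily loaded
    return CUSTOMERS[customer_id]

# Looks innocent, but issues 1 + len(orders) queries:
names = [fetch_customer(o["customer_id"]) for o in fetch_orders()]
print(QUERY_COUNT)  # 11 queries for 10 orders

# The fix is a single batched fetch (a join, in a real database):
QUERY_COUNT = 0
orders = fetch_orders()
wanted = {o["customer_id"] for o in orders}
QUERY_COUNT += 1              # one batched query for all customers
customers = {cid: CUSTOMERS[cid] for cid in wanted}
names = [customers[o["customer_id"]] for o in orders]
print(QUERY_COUNT)  # 2 queries total
```

Someone who has only ever asked for "code that lists orders with customer names" has no reason to notice the difference between those two versions until production falls over.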
Less skilled and more productive can both be true
I'm looking into alternatives because I have zero interest in having LLM tools dictated to me because some MBA exec is sold on the hype
I find it impossible to get into flow with the autocomplete constantly interrupting me, and the code they generate in chat mode sucks
I lead a team building Markhub, an AI-native workspace, and we have this debate internally all the time. Our conclusion is that there are two types of "thinking" in programming:
"Architectural Thinking": This is the joy you're talking about. The deep, satisfying process of designing systems, building mental models, and solving a core problem. This is the creative work, and an AI getting in the way of this feels terrible. We agree that this part should be protected.
"Translational Thinking": This is the boring, repetitive work. Turning a clear idea into boilerplate code, writing repetitive test cases, summarizing a long thread of feedback into a list of tasks, or refactoring code. This is the work we want to delegate.
Our philosophy is that AI should not replace Architectural Thinking; it should eliminate Translational Thinking so that we have more time for the joyful, deep work.
For your mental model problem, our solution has been to use our AI, MAKi, not to write the core logic, but to summarize the context around the logic. For example, after a long discussion about a new feature, I ask MAKi to "summarize this conversation and extract the action items." The AI handles the "what," freeing me up to focus on the "how."
You are not a dinosaur. You are protecting the part of the work that matters most.
I've tried new things occasionally, and I keep going back to a text editor and a shell window running something like Make. It's probably not the most efficient process, but it works for everything, and there's value in that. I have no interest in a tool that generates lots of code for me that may or may not be correct, which I'd then have to go through with a fine-tooth comb; I can personally generate lots of code that may or may not be correct. And when that fails, I have run some projects as copy-paste snippets from Stack Overflow until they worked. It wasn't my idea of a good time, but it was better than spending the time to understand the many layers of OSX when all I wanted was to get the pixel value at a point on the screen into AppleScript, and I never intended to touch OSX again (and I haven't).
I work with grad students who write a lot of code to analyze data. There is an obvious divide in comprehension between those who genuinely write their own programs vs those who use LLMs for bulk code generation. Whether that is correlation or causation is of course debatable.
In one sense, blindly copying from an LLM is just the new version of blindly copying from Stack Overflow and forum posts, and it seems to be about the same fraction of people either way. There isn't much harm in reproducing boilerplate that's already searchable online, but in that situation it puts orders of magnitude less carbon in the atmosphere to just search for it traditionally.
For the philosophical insights into ethics... we may turn to fiction =3
I agree with you, 100%. I like typing out code by hand. I like referring to the Python docs, and I like the feeling of slowly putting code together and figuring out the building blocks one by one. In my mind, AI is about efficiency for the sake of efficiency, not for the sake of enjoyment, and I enjoy programming.
Furthermore, I think AI embodies a model of the human being as a narrowly-scoped tool, converted from creator into a replaceable component whose only job is to provide conceptual input into design. It sounds good at first ("computers do the boring stuff, humans do the creative stuff"), but, and it's a big but: as an artist too, I think the creative stuff can't be separated from the "boring" stuff, and when looked at properly, the "boring" stuff can actually become serene.
I know there's always the counterpoint: what about other automations? Well, I think there is a limit past which automations give diminishing returns and become counterproductive, and therefore we need to be aware of all automations, but AI is the first sort of automation that is categorically always past the point of diminishing returns, because it targets exactly the sort of cognitive features that we should be doing ourselves.
Most people here disagree with me, and frequently downvote me on the topic of AI. But I'll say this: in a world where efficiency and productivity have become doctrine, most people have been converted into thinking only about the advancement of the machine, and have lost the soul to enjoy that which is beyond mere mental performance.
Sadly, people in the technical domain often find emotional satisfaction in new tools, which is why anything beyond the technical is often derided by those in tech, much to their disadvantage.
But not using AI at all is also idiotic right now. At the very least you should be using it for autocomplete, where in the _vast_ majority of cases any current leading LLM will give you _far more_ than going without.
Coding agents still give you control (at least for now), but are like having really good autocomplete. Instead of using copilot to complete a line or two, using something like Cursor you can generate a whole function or class based on your spec then you can refine and tweak the more nuanced and important bits where necessary.
For example, I was doing some UI work the other day. In the past it would have taken a while just to get a basic page layout together when writing it yourself, but with a coding assistant I generated a basic page, asking it to use an image mock-up, a component library, and some other pages as references. Then I could get on with the fun bits: building the more novel parts of the UI.
I mean if it's code you're working on for fun then work however you like, but I don't know why someone would employ a dev working in such an inefficient way in 2025.
>I don't know why someone would employ a dev working in such an inefficient way in 2025.
It amazes me how fast the hype has taken off. There is no credible evidence that, for experienced devs, working with AI coding tools makes you significantly more productive.
Of course. So if I'm faced with some boilerplate, I try to refactor it away so it's less boilerplatey. Perhaps I'm lucky but mostly this seems to work, I don't often find myself writing boilerplate.
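A toy illustration of the kind of refactor meant here, making the boilerplate "less boilerplatey" by turning near-identical functions into data plus one helper (all names and fields are made up):

```python
# Before: one near-identical accessor per field (the boilerplate).
# After: a table-driven helper. Everything here is illustrative.

RAW = {"name": " Ada ", "email": "ADA@EXAMPLE.COM", "city": " London "}

# Boilerplate version: copy-paste with tiny variations.
def get_name(d):  return d["name"].strip()
def get_email(d): return d["email"].strip().lower()
def get_city(d):  return d["city"].strip()

# Refactored version: the per-field variation becomes data, not code.
FIELDS = {
    "name":  [str.strip],
    "email": [str.strip, str.lower],
    "city":  [str.strip],
}

def get(d, field):
    value = d[field]
    for transform in FIELDS[field]:
        value = transform(value)
    return value

assert get(RAW, "email") == get_email(RAW) == "ada@example.com"
print(get(RAW, "name"))  # Ada
```

Adding a fourth field is now one line in a table instead of another copy-pasted function, which is exactly the kind of repetition an LLM would otherwise happily generate for you forever.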
> I don't know why someone would employ a dev working in such an inefficient way in 2025
Am I working inefficiently? I'm not sure. How much time does the typing part of programming actually take up? I guess it varies, but it's definitely less than 50% for me. Thinking/designing/communicating/listening take most of my time. The typing part is not a bottleneck.
The majority of the code I write is not boilerplate, and writing the boilerplate myself is useful to me.
And I think that's the problem. I think autocomplete itself is a bad thing. If one has autocomplete, one is more likely to type stuff that wasn't worth typing in the first place.
No, but I don't find debugging the LLM boilerplate that is at best 50-80% correct very fun either
I have better ways to automate boilerplate than using LLMs
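One such alternative, sketched here with made-up field names: deterministic code generation from a small spec, so the boilerplate is reproducible and reviewable rather than probabilistic.

```python
# A deterministic boilerplate generator: render a dataclass from a spec.
# The spec and the Config name are hypothetical, purely for illustration.

FIELDS = [("host", "str"), ("port", "int"), ("debug", "bool")]

def render_dataclass(name, fields):
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {name}:",
    ]
    lines += [f"    {fname}: {ftype}" for fname, ftype in fields]
    return "\n".join(lines)

code = render_dataclass("Config", FIELDS)
print(code)
```

Change the spec, rerun the script, and you get exactly the same output every time, with no 50-80%-correct draft to debug.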