Where are all the new houses? I admit I am not a bleeding edge seeker when it comes to software consumption, but surely a 10x increase in the industry output would be noticeable to anyone?
[0] https://www.marble.onl/posts/this_cost_170.html
[1] https://www.anthropic.com/engineering/building-c-compiler
You can't saw faster than the wood arrives. Also the layout of the whole job site is now wrong and the council approvals were the actual bottleneck to how many houses could be built in the first place... :/
Coding speed was never really a bottleneck anywhere I have worked - it’s all the processes around it that take the most time and AI doesn’t help that much there.
If I may hijack your analogy, it would be like if all the construction crews got really fast at their work, so much so that the city decided to go for an “iterative construction” strategy because, in isolation, the cost of one team trying different designs on-site until they hit on one they liked became very small compared to the cost of getting city planners and civil engineers involved up-front. But what wasn’t considered was the rework multiplier effect that comes into play when the people building the water, sewage, electricity, telephones, roads, etc. are all repeatedly tweaking designs with minimal coordination amongst each other. So then those tweaks keep inducing additional design tweaks and rework on adjacent contractors because none of these design changes happen in a vacuum. Next thing you know all the houses are built but now need to be rewired because the electricity panel is designed for a different mains voltage from the drop and also it’s in the wrong part of the house because of a late change from overhead lines in the alleys to underground lines below the street.
Many have observed that coding agents lack object permanence, so keeping them on a coherent plan requires giving them a thoroughly documented plan up front. It actually has me wondering if optimal coding agent usage at scale resembles something of a return to waterfall (probably in more of a Royce sense than the bogeyman agile evangelists derived from the original idea) where the humans on the team mostly spend their time banging out systems specifications and testing protocols, and iteration on the spec becomes somewhat more removed from implementing it than it is in typical practice nowadays.
I don’t see AI helping with knowing what to build at all and I also don’t see AI finding novel approaches to anything.
Sure, I do think there is some unrealized potential somewhere in terms of relatively low value things nobody built before because it just wasn’t worth the time investment – but those things are necessarily relatively low value (or else it would have been worth it to build it) and as such also relatively limited.
Software has amazing economies of scale. So I don’t think the builder/tool analogy works at all. The economics don’t map. Since you only have to build software once and then it doesn’t matter how often you use it (yeah, a simplification), even pretty low value things have always been worth building. In other words: there is tons of software out there. That’s not the issue. The issue is: what is the right software, and can it solve my problems?
The problem with this is that, after you do this hard work, someone can just easily copy your hard work and UI/UX taste. I think distribution will be very important in the future.
We might end up with what you already have in social media, where influencers copy someone's post/video without giving credit to the original author.
At the same time I see people claiming 100x increases and how they produce 15k lines of code each day thanks to AI, but all I can wonder is how these people managed to find 100x work that needed to be done.
So now we need to think of different kinds of ideas, something along the lines of games, which may take multiple iterations to get perfected.
Headline features aren't much faster. You still need to gather requirements, design a good architecture, talk with stakeholders, test your implementation, gather feedback, etc. Speeding up the actual coding can only move the needle so much.
I've got a couple Claude Code skills set up where I just copy/paste a Slack link into it and it links people relevant docs, gives them relevant troubleshooting from our logs, and a hook on the slack tools appends a Claude signature to make sure they know they weren't worth my time.
That said, there's this weird quicksand people around me get in where they just spend weeks and weeks on their AI tools and don't actually do much of anything? Like bro you burned your 5 hour CC Enterprise limit all week and committed...nothing?
https://www.reddit.com/r/pcgaming/comments/1pl7kg1/over_1900...
I created a platform for a virtual pub quiz for my team at my day job, built multiple landing pages for events, and debugged darktable to recognize my new camera (it was too new to be included in the camera.xml file, but the specs were known). I debugged quite a few parts of a legacy shitshow of an application, did a lot of infrastructure optimization, and I also created a massive ton of content as a centaur, in dialog with Claude Code.
But I don't do "Show HN" posts. And I don't advertise my builds - because other than those named, most are one off things, that I throw away after this one problem was solved.
To me code became way more ephemeral.
But YMMV - and that is a good thing. I also believe that far fewer people than the hype bubble implies are actually really into hard-core usage like Pete Steinberger or Armin Ronacher and the likes.
I use AI/agents in quite similar ways, and even rekindled multiple personal projects that had stalled. However, to borrow OP's parlance, these are not "houses" - more like sheds and tree-houses. They are fun and useful, but not moving the needle on housing stock supply, so to speak.
People haven't noticed because the software industry was already mostly unoriginal slop, even prior to LLMs, and people are good at ignoring unoriginal slop.
To be honest, I think the surrounding paragraph lumps together all anti-AI sentiments.
For example, there is a big difference between "all AI output is slop" (which is objectively false) and "AI enables sloppy people to do sloppy work" (which is objectively true), and there's a whole spectrum.
What bugs me personally is not at all my own usage of these tools, but the increase in workload caused by other people using these tools to drown me in nonsensical garbage. In recent months, the extra workload has far exceeded my own productivity gains.
For the non-technical, imagine a hypochondriac using chatgpt to generate hundreds of pages of "health analysis" that they then hand to their doctor and expect a thorough read and opinion of, vs. the doctor using chatgpt for sparring on a particular issue.
https://en.wikipedia.org/wiki/Brandolini%27s_law
>The amount of energy needed to refute bullshit is an order of magnitude bigger than that needed to produce it.
Small and mid-sized companies are getting custom software now.
Small software can be packed with extra features instead of the bare minimum.
rather than new stuff for everyone to use, the future could easily be everyone building their own bespoke tools for their own problems.
> You have to turn off the sandbox, which means you have to provide your own sandbox. I have tried just about everything and I highly recommend: use a fresh VM.
> I am extremely out of touch with anti-LLM arguments
'Just pay out the arse and run models without a sandbox or in some annoying VM just to see them fail. Wait, some people are against this?'
So why not just wait out this insane initial phase, and if anything is left standing afterwards and proves itself, just learn that.
The anti-LLM arguments aren't just "hand tools are more pure." I would even say that isn't even a majority argument. There are plenty more arguments to make about environmental and economic sustainability, correctness, safety, intellectual property rights, and whether there are actual productivity gains distinguishable from placebo.
It's one of the reasons why "I am enjoying programming again" is such a frustrating genre of blog post right now. Like, I'm soooo glad we could fire up some old coal plants so you could have a little treat, Brian from Middle Management.
I beg to differ. There are a whole lot of folks with astonishingly incomplete understanding about all the facts here who are going to continue to make things very, very complicated. Disagreement is meaningless when the relevant parties are not working from the same assumption of basic knowledge.
There’s a lot of unwillingness to even attempt to try the tools.
"Anti-LLM sentiment" within software development is nearly non-existent. The biggest kind of push-back to LLMs that we see on HN and elsewhere, is effectively just pragmatic skepticism around the effectiveness/utility/ROI of LLMs when employed for specific use-cases. Which isn't "anti-LLM sentiment" any more than skepticism around the ability of junior programmers to complete complex projects is "anti-junior-programmer sentiment."
The difference between the perspectives you find in the creative professions vs in software dev doesn't come down to "not getting" or "not understanding"; it really is a question of relative exposure to these pro-LLM vs anti-LLM ideas. Software dev and the creative professions are acting as entirely separate filter-bubbles of conversation here. You can end up entirely on the outside of one or the other of them by accident, and so end up entirely without exposure to one or the other set of ideas/beliefs/memes.
(If you're curious, my own SO actually has this filter-bubble effect from the opposite end, so I can describe what that looks like. She only hears the negative sentiment coming from the creatives she follows, while also having to dodge endless AI slop flooding all the marketplaces and recommendation feeds she previously used to discover new media to consume. And her job is one you do with your hands and specialized domain knowledge; so none of her coworkers use AI for literally anything. [Industry magazines in her field say "AI is revolutionizing her industry" — but they mean ML, not generative AI.] She has no questions that ChatGPT could answer for her. She doesn't have any friends who are productively co-working with AI. She is 100% out-of-touch with pro-LLM sentiment.)
Strong disagree right there. I remember talking to a (developer) coworker a few months ago who seemed like the biggest AI proponent on our team. When we were one-on-one during a lunch though, he revealed that he really doesn't like AI that much at all; he's just afraid to speak up against it. I'm in a few Discord channels with a lot of highly skilled (senior and principal) programmers who mostly work in game development (or adjacent), and most of them either mock LLMs or have a lot of derision for them. Hacker News is kind of a weird pro-AI bubble; most other places are not nearly as keen on this stuff.
I see it all the time in professional and personal circles. For one, you are shifting the goalposts on what is "anti-LLM"; for two, people are talking about the negative social, political, and environmental impacts.
What is your source here?
I like this. What's more, while AI-generated art has a characteristic sameyness to it, the human-produced art stands out in its originality. It has character and soul. Even if it's bad! AI slop has made the human-created stuff seem even more striking by comparison. The market for human art isn't going anywhere, just like the audience for human-played chess went nowhere after Deep Blue. I think people will pay a premium for it, just to distinguish themselves from the slop. The same is true of writing and especially music. I know of no one who likes listening to AI-generated music. Even Sabrina Carpenter would raise less objection.
The same, I'm afraid, cannot be said for software—because there is little value for human expression in the code itself. Code is—almost entirely—strictly utilitarian. So we are now at an inflection point where LLMs can generate and validate code that's nearly as good as, if not better than, what we can produce on our own. And to not make use of them is about as silly as Mel Kaye still punching in instruction opcodes in hex into the RPC-4000, while his colleagues make use of these fancy new things called "compilers". They're off building unimaginably more complex software than they could before, but hey, he gets his pick of locations on the rotating memory drum!
I'm one of the nonexistent anti-LLMers when it comes to software. I hate talking to a clanker, whose training data set I don't even have access to let alone the ability to understand how my input affects its output, just to do what I do normally with the neural net I've carried around in my skull and trained extensively for this very purpose. I like working directly with code. Code is not just a product for me; it is a medium of thought and expression. It is a formalized notation of a process that I can use to understand and shape that process.
But with the right agentic loops, LLMs can just do more, faster. There's really no point in resisting. The marginal value of what I do has just dropped to zero.
This is certainly untrue. I want to say "obviously", which means that maybe I am misunderstanding you. Below are some examples of negative sentiments programmers have - can you explain why you are not counting these?
NOTE: I am not presenting these as an "LLMs are bad" argument. My own feelings go both ways. There is a lot that's great about LLMs, and I don't necessarily agree with every word I've written below - some of it is just my paraphrasing of what other people say. I'm only listing examples of what drives existing anti-LLM sentiment in programmers.
1. Job loss, loss of income, or threat thereof
These are exacerbated by the pace of change, since so many people have already spent their lives and money establishing themselves in the career and can't realistically pivot without becoming miserable. That's the same story for every large, fast change, though arguably this one is very large and very fast even by those standards. Lots of tech leadership is focusing even more than it already was on cheap contractors, and/or pushing employees for unrealistic productivity increases, i.e. exacerbating the "fast > good" problem. A lot of leadership is also overestimating how far AI reduces the barrier to creating things, as opposed to mostly just speeding up a person's existing capabilities. Some leadership is also using the apparent loss of job security as leverage beyond salary suppression (even less remote work allowed, more surveillance, worse office conditions, etc.).
2. Happiness loss (in regards to the job itself, not all the other stuff in this list)
This is regarding people who enjoy writing/designing programs but don't enjoy directing LLMs; or who don't enjoy debugging the types of mistakes LLMs tend to make, as opposed to the types of mistakes that human devs tend to make. For these people, it's like their job was forcibly changed to a different, almost unrelated job, which can be miserable depending on why you were good at - or why you enjoyed - the old job.
3. Uncertainty/skepticism
I'm pushing back on your dismissal of this one as "not anti-LLM sentiment" - the comparison doesn't make sense. If I was forced to only review junior dev code instead of ever writing my own code or reviewing experienced dev code, I would be unhappy. And I love teaching juniors! And even if we ignore the subset of cases where it doesn't do a good job or assume it will soon be senior-level for every use case, this still overlaps with the above problem: The mistakes it makes are not like the mistakes a human makes. For some people, it's more unnatural/stressful to keep your eyes peeled for the kinds of mistakes it makes. For these people, it's a shift away from objective, detail-oriented, controlled, concrete thinking; away from the feeling of making something with your hands; and toward a more wishy-washy creation experience that can create a feeling of lack of control.
4. Expertise loss
A lot of positive outcomes with LLMs come from being already experienced. Some argue this will be eroded - both for new devs and existing experienced devs.
5. The training data ownership/morality angle
But secondly, there's an entire field of LLM-assisted coding that's being almost entirely neglected and that's code autocomplete models. Fundamentally they're the same technology as agents and should be doing the same thing: indexing your code in the background, filtering the context, etc, but there's much less attention and it does feel like the models are stagnating.
I find that very unfortunate. Compare the two workflows:
With a normal coding agent, you write your prompt, then you have to wait at least a full minute for the result (generally more, depending on the task), breaking your flow and forcing you to task-switch. Then it gives you a giant mass of code, and of course 99% of the time you just approve and test it because it's a slog to read through what it did. If it doesn't work as intended, you get angry at the model and retry your prompt, spending more tokens the longer your chat history grows.
But with LLM-powered auto-complete, when you want, say, a function to do X, you write your comment describing it first, just like you should if you were writing it yourself. You instantly see a small section of code and if it's not what you want, you can alter your comment. Even if it's not 100% correct, multi-line autocomplete is great because you approve it line by line and can stop when it gets to the incorrect parts, and you're not forced to task switch and you don't lose your concentration, that great sense of "flow".
Fundamentally it's not that different from agentic coding - except instead of prompting in a chatbox, you write comments in the files directly. But I much prefer the quick feedback loop, the ability to ignore outputs you don't want, and the fact that I don't feel like I'm losing track of what my code is doing.
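To make the comment-first workflow concrete, here's a hedged sketch (the function name, comment, and body are my own invention, not output from any particular completion model). You write the spec as a comment, and the model proposes the body, which you accept line by line:

```python
# Clamp each value in `values` to the inclusive range [lo, hi]
# and return a new list; the input list is left unmodified.
def clamp_all(values, lo, hi):
    # The line below is the kind of completion you'd expect an
    # autocomplete model to propose from the comment above, and
    # which you can accept or reject without leaving the editor.
    return [min(max(v, lo), hi) for v in values]
```

If the suggestion drifts from what you meant, you sharpen the comment rather than re-prompting a chat, so the feedback loop stays inside the file you're already editing.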
But if you try some penny-saving cheap model like Sonnet [..bad things..]. [Better] pay through the nose for Opus.
After blowing $800 of my bootstrap startup funds on Cursor with Opus for myself in a very productive January, I figured I had to try to change things up... so this month I'm jumping between Claude Code and Cursor, sometimes writing the plans and having the conversation in Cursor and dumping the implementation plan into Claude. Opus in Cursor is just so much more responsive and easy to talk to, compared to Opus in Claude.
Cursor has this "Auto" mode which feels like it has very liberal limits (amortized cost, I guess) that I'm also trying to use more, but I don't really like flipping a coin where, if it lands heads, I waste half an hour discovering the LLM made a mess and have to try again, forcing a specific model.
Perhaps in March I'll bite the bullet and take this author's advice.
You can enjoy it while it lasts, OpenAI is being very liberal with their limits because of CC eating their lunch rn.
I was spending unholy amounts of money and tokens (subsidized cloud credits tho) forcing Opus for everything, but I’m very happy with this new setup. I’ve also experimented with OpenCode and their Zen subscription to test Kimi K2.5 and similar models, and they also seem like a very good alternative for some tasks.
What I cannot stand tho is using Sonnet directly (it’s fine as a subagent); I’ve found it hard to control, and it doesn’t follow detailed instructions.
This vscode extension makes it almost as easy to point codex to something as when doing it in cursor:
https://github.com/suzukenz/vscode-copy-selection-with-line-...
I think we are going to start hearing stories of people going into thousands in CC debt because they were essentially gambling with token usage thinking they would hit some startup jackpot.
Startup is a gamble with or without the LLM costs.
I have been coding for 20 years, I have a good feel for how much time I would have spent without LLM assistance. And if LLMs vanish from the face of the earth tomorrow, I still saved myself that time.
It's 90 percent the same thing as Claude but with flat-rate costs.
"Using anything other than the frontier models is actively harmful" - so how come I'm getting solid results from Copilot and Haiku/Flash? Observe, Orient, Decide, Act, Review, Modify, Repeat. Loops with fancy heuristics, optimized prompts, and decent tools, have good results with most models released in the past year.
We're at the point where copilot is irrelevant. Your way of working is irrelevant. Because that's not how you interact with coding AIs anymore, you're chatting with them about the code outside the IDE.
Just this month I've burned through 80% of my Copilot quota of Claude Opus 4.6 in a couple of days to get it to help me with a silly hobby project: https://github.com/ncruces/dbldbl
It did help. The project had been sitting for 3 years without trig and hyperbolic trig, and in a couple days of spare time I'm adding it. Some of it through rubber ducking chat and/or algorithmic papers review (give me formulas, I'll do it), some through agent mode (give me code).
But if you review the PR written in agent mode, the model still lies to my face, in trivial but hard to verify ways. Like adding tests that say cosh(1) is this number at that OEIS link, and both the number and the OEIS link are wrong, but obviously tests pass because it's a lie.
I'm not trying to bash the tech. I use it at work in limited but helpful ways, and use hobby stuff like this as a testbed precisely to try to figure out what they're good at in a low stakes setting.
But you trust the plausible-looking output of these things at your own peril.
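One cheap defense against that kind of lie is to never let a test hard-code a constant the model asserted without recomputing it independently. A minimal sketch (the "claimed" value plays the role of the number the model wrote into the test; the cross-check derives it from the definition of cosh instead of trusting it):

```python
import math

# Value a model-written test might hard-code for cosh(1).
claimed = 1.5430806348152437

# Recompute from the definition cosh(x) = (e^x + e^-x) / 2,
# instead of trusting the asserted constant or a cited link.
independent = (math.e + 1.0 / math.e) / 2.0

# Both the library and the definition must agree with the claim.
assert abs(independent - math.cosh(1.0)) < 1e-15
assert abs(claimed - independent) < 1e-15
```

Had the model's number (or its OEIS citation) been wrong, the cross-check fails loudly instead of "passing because it's a lie."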
If you check the docs, smaller, faster, older models are recommended for 'lightweight' coding. There are several reasons for this: 1) a smaller model doesn't have as good deep reasoning, so it works okay for a simple ask; 2) small context, small task, small model can produce better results than big context, big task, big model. The lost-in-the-middle problem is still unsolved, leading to mistakes that get worse with big context, and longer runs exacerbate issues. So a small context/task that ends and starts a new loop (with planning & learning) ends up working really well and quickly.
There's a difference between tasks and problem-solving, though. For difficult problems, you want a frontier reasoning model.
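The "small task, fresh loop" pattern can be sketched generically. This is a toy illustration of the shape of such a loop, not any real agent framework's API; `fake_model` and `review` are invented stand-ins:

```python
def review(patch: str) -> bool:
    # Cheap acceptance check, standing in for "run the tests".
    return patch.endswith("ok")

def small_loop(task: str, model, max_iters: int = 5):
    # Each pass starts from the task alone, instead of growing
    # one big chat history that invites lost-in-the-middle errors.
    for attempt in range(max_iters):
        patch = model(task, attempt)   # Act: ask for one small change
        if review(patch):              # Review: verify before accepting
            return patch
    return None                        # Give up after max_iters small tries

def fake_model(task: str, attempt: int) -> str:
    # Toy model: fails twice, then produces an acceptable patch.
    return f"{task}-try{attempt}-" + ("ok" if attempt >= 2 else "bad")
```

Because each iteration carries only the task and one candidate patch, a small model stays inside the context size where it performs well, and a bad attempt is discarded rather than accumulating in the prompt.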
Yes.
> It's hard to communicate the difference the last 6 months has seen.
No, it isn't. The hypebeast discovered Claude code, but hasn't yet realized that the "let the model burn tokens with access to a shell" part is the key innovation, not the model itself.
I can (and do) use GH Copilot's "agent" mode with older generation models, and it's fine. There's no step function of improvement from one model to another, though there are always specific situations where one outperforms. My current go-to model for "sit and spin" mode is actually Grok, and I will splurge for tokens when that doesn't work. Tools and skills and blahblahblah are nice to have (and in fact, part of GH Copilot now), but not at all core to the process.
I took a look at the result, and maybe half the stuff is missing completely; the rest is cryptic. I know that codebase by heart since I created it. From my 20+ years of experience, correcting all this would take way more effort than a manual rewrite from scratch by a senior. Suffice to say that's not what upper management wants to hear; LLM adoption often became one of their yearly targets to be evaluated against. So we have a hammer and are looking for nails to bend and crook.
Suffice to say this effort led nowhere since we have other high priority goals, for now. Smaller things here & there, why not. Bigger efforts, so far sawed-off 2-barrel shotgun loaded with buckshot right into both feet.
I used claude code to port rust pdb parsing library to typescript.
My SumatraPDF is a large C++ app and I wanted visibility into where the size of functions / data goes, and the layout of classes. So I wanted to build a tool to dump info out of a PDB. But I have been diagnosed with an extreme case of Rustophobiatis, so I just can't touch Rust code. Hence, the port to TypeScript.
With my assistance it did the work in an afternoon and did it well. The code worked. I ran it against large PDB from SumatraPDF and it matched the output of other tools.
In a way, porting from one language to another is an extreme case of refactoring, and Claude did it very well.
I think that in general (your experience notwithstanding) Claude Code is excellent at refactorings.
Here are 3 refactorings from SumatraPDF where I asked claude code to simplify code written by a human:
https://github.com/sumatrapdfreader/sumatrapdf/commit/a472d3... https://github.com/sumatrapdfreader/sumatrapdf/commit/5624aa... https://github.com/sumatrapdfreader/sumatrapdf/commit/a40bc9...
I hope you agree the code written by Claude is better than the code written by a human.
Granted, those are small changes, but I think it generalizes to bigger changes. I have a few refactorings in mind that I've wanted to do for a long time, and maybe with Claude they will finally be feasible (they were not feasible before only because I don't have an infinite amount of time to do everything I want to do).
If that is true, why should one invest in learning now rather than waiting for 8 months to learn whatever is the frontier model then?
I think you (and others) might be misunderstanding his statement a bit. He's not saying that using an old model is harmful in the sense that it outputs bad code -- he's saying it's harmful because some of the lessons you learn will be out of date and not apply to the latest models.
So yes, if you use current frontier models, you'll need to recalibrate and unlearn a few things when the next generation comes out. But in the meantime, you will have gotten 8 months (or however long it takes) of value out of the current generation.
But if you do want to use LLMs for coding now, not using the best models just doesn't make sense.
Using agents that interact with APIs represents people being able to own their user experience more. Why not craft a frontend that behaves exactly the way YOU want it to, tailor made for YOUR work, abstracting the set of products you are using and focusing only on the actual relevant bits of the work you are doing? Maybe a downside might be that there is more explicit metering of use in these products instead of the per-user licensing that is common today. But the upside is there is so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.
OK, but: that's an economic situation.
> so much less scope for engagement-hacking, dark patterns, useless upselling, and so on.
Right, so there's less profit in it.
To me it seems this will make the market more adversarial, not less. Increasing amounts of effort will be expended to prevent LLMs interacting with your software or web pages. Or in some cases exploit the user's agentic LLM to make a bad decision on their behalf.
We're already seeing this with search. Ask an LLM "what tools do X" and the answer depends heavily on structured data, citation patterns, and how well your docs/content map to the LLM's training. Companies with great API docs but zero presence in the training data just won't exist to these agents.
So it's not just "API docs = product" -- it's more like "machine-legible presence = existence." Which is a weird new SEO-like discipline that barely has a name yet.
I think this is a neglected area that will see a lot of development in the near future. I think that even if development on AI models stopped today - if no new model was ever trained again - there are still decades of innovation ahead of us in harnessing the models we already have.
Consider ChatGPT: the first release relied entirely on its training data to answer questions. Today, it typically does a few Google searches and summarizes the results. The model has improved, but so has the way we use it.
How I program with agents - https://news.ycombinator.com/item?id=44221655 - June 2025 (295 comments)
I see this a lot here
Copyright law, education, just the sheer scale of things changing because of LLMs are some things off the top of my head why "power tools vs carpentry" is a bad analogy.
Writing code has never been the limiting factor, it's everything else that goes into it.
Like, I don't mind that there's a bunch of weekend warriors out here building shoddy gazebos and sheds with their brand new overpriced tools, incorrecting each other on the best way to do things. We had that with the bitcoin and NFT bros already.
What I do roll my eyes at is when the bros start talking about how they're totally going to build bridges and planes and it's gonna be soooo easy to get to new places, just slap down a bridge.
Uh huh. Y'all do not understand what building those actually entails lol.
Sure, replace me with AI, but I'd better get royalties on my public contributions. I, like many other developers, have kids and other responsibilities to pay for.
We did not share our work publicly to be replaced. The same way I did not lend my neighbour my car so he could run me over, that was implicit.
The 'fear' is about losing one's livelihood and getting locked out of homeownership and financial security. it's not complicated. life is actually largely determined by your access to capital, despite whatever fresh coping strategy the afflicted (and the afflicting) like to peddle.
the quality of life versus capital availability is very non-linear. there is a step-change around the $500k mark where you reach 'orbital velocity', where as long as you don't suffer severe misfortune or make mistakes, you will start accelerating upwards (albeit very slowly).
under that line, you are constantly having to fight 'gravity'.
basically everyone in tech is openly or quietly aiming to get there, and LLMs have made that trek ever more precarious than before.
Not a plug but really that’s exactly why we’re building sandboxes for agents with local laptop quality. Starting with remote xcode+sim sandboxes for iOS, high mem sandbox with Android Emulator on GPU accel for Android.
No machine allocation but composable sandboxes that make up a developer persona’s laptop.
If interested, a quick demo here https://www.loom.com/share/c0c618ed756d46d39f0e20c7feec996d
muvaf[at]limrun[dot]com
> That was a net benefit to the world, that we all don't have to work to eat.
I’m pretty sure most all of us are still working to have food to eat and shelter for ourselves and our families.
Also, while the on-going industrial and technological revolution has certainly brought benefits, it’s an open question as to whether it will turn out to be a net benefit. There’s a large-scale tragedy of the commons experiment playing out and it’s hard to say what the result will be.
It might be just me but this reads as very tone deaf. From my perspective, CEOs are seething at the mouth to make as many developers redundant as possible, not being shy about this desire. (I don't see this at all as inevitable, but tech leaders have made their position clear)
Like, imagine the smugness of some 18th century "CEO" telling an artisan, despite the fact that he'll be resigned to working in horrific conditions at a factory, to not worry and think of all the mass produced consumer goods he may enjoy one day.
It's not at all a stretch of the imagination that current tech workers may be in a very precarious situation. All the slopware in the world wouldn't console them.
While the idea of programmers working two hours a day and spending the rest of it with their family seems sunny, that's absolutely not how business is going to treat it.
Thought experiment... CEO has a team of 8 engineers. They do some experiments with AI, and they discover that their engineers are 2x more effective on average . What does the CEO do?
a) Cut the workday to 4 hours so that all the engineers have better work/life balance, since the same amount of work is being done.
b) Fire half the engineers, make the 4 remaining guys pick up the slack, rinse and repeat until there's one guy left?
Like, come on. There's pushback on this stuff not because the technology is bad (although it's overhyped), but because no sane person trusts our current economic system to provide anything resembling humane treatment of workers. The super rich are perfectly fine seeing half the population become unemployed, as far as I can tell, as long as their stock numbers go up.
Though at the same time, I also think a lot of the CEO types (at least in the pure-software world) who believe they are going to capture the value of this productivity shift are in for a rude awakening, because if AI doesn't stall out, it's only a matter of time between their engineers becoming replaceable and their company not needing to exist at all anymore.
"AI won't replace you. The guy who's about to get fired but has more to lose is going to replace you."
But if AI keeps getting better at code, it will produce entire in-silico simulation workflows to test new drugs or even to design synthetic life (which, again, could make us all die, or worse). Yet there is a tiny, tiny chance we will use it to fix some of the darkest aspects of human existence. I will take that.
We have a lot of actual problems to deal with that aren't telling ghost stories about sand. Focus on those.
Much of my learning still requires experimentation - including lots of token volume so hitting limits is a problem.
And secondly, I’m looking for workflows that build the thing without needing to be at the absolute edge of LLM capability. That's where fragility and unpredictability live, where a new model with a slightly different personality is released and breaks everything. I'd rather have a flow that is simple and idiot-proof and doesn't fall apart at the first sign of non-bleeding-edge tokens. That means skipping the gains from something Opus could one-shot, ofc, but that's acceptable to me.
I don't think that's the best way to look at it. I think that now every team has the power to build and maintain an internal agent (tool + UX) to manage software products. I don't necessarily think that chat-only is enough except for small projects, so teams will build agents that give them access to the level of abstraction that works best.
It's a single data point, but this weekend (i.e., in 2 days) I built a desktop + web agent that is able to help me reason about system design and code, built with Codex and powered by the Codex SDK. It is high quality. I've been a software engineer and director of engineering for 10 years. I'm blown away.
It's always the CTO types who get most enthusiastic.
I have yet to do this and see any year other than 2011. Was there someone who bought a ton of accounts in 2011 to farm them out? A data breach? Was 2011 just a very big year for new users? (My own account is from 2011.)
I like Claude Code too btw.
The crazy thing here is that I wrote the initial comment myself!
Calling it bot is a bit dismissive though. It's an agent!
Not sure which camp I'm in, but I enjoyed the imagery.
Wow I know that feel.
I'm here using LLM for daily work and even hobbies in very conservative manners and didn't think much of it.
Now when I have casual discussions with other folks, especially non-tech people, the visceral hatred I get for even mentioning AI, and the fact that I use it, is insane. There's an entire subgroup of people so out of touch with these tools that they think they're the devil, like the anti-GMO crazies and the PETA psychos.
I agree with this and I think it's funny to see people publish best practices for working with AI that are like, "Write a clear spec. Have a style guide. Use automated tests."
I'm not convinced it's 100% true because I think there are code patterns that AI handles better than humans and vice versa. But I think it's true enough to use as a guiding philosophy.
My conclusion as well. It feels paradoxical, maybe because on some level I still think of an LLM as some weird gadget, not a coworker. Context ephemerality is more or less the only veritable difference from a human programmer, I'd say. And, even then, context introduction with LLMs is a speedrun of how you'd do it with new human members of a project. Awesome times we live in.
As the author says, there's nothing wrong with the idea of the IDE. Of course you want to be using the best, most powerful tools!
AI showed us that our current-gen text-editor-first IDEs are massively underserving the needs of the public, yes, but it didn't really solve that problem. We still need better IDEs! What has changed is that we now understand how badly we need them. (source: I am an IDE author)
Just a question: what IDE feature is obsolete now? The ability to navigate code? Integration with databases, Docker, JIRA, GitHub (like having PR comments available, listed, etc.), Git? Working with remote files? Building the project?
Yes, I can ask Copilot to build my project and verify test results, but it will eat a lot of tokens and the added value is almost none.
The added value is that it can iterate autonomously and finish tasks that it can't one-shot in its first code edit. Which is basically all tasks that I assign to Copilot.
The added value is that I get to review fully-baked PRs that meet some bar of quality. Just like I don't review human PRs if they don't pass CI.
Fully agree on IDEs, though. I absolutely still need an IDE to iterate on PRs, review them, and tweak them manually. I find VSCode+Copilot to be very good for this workflow. I'm not into vibe coding.
Visual C++ 6 was incredible! My favourite IDE of all time too.
┌────────────────────────────┐
│ User │
└──────────────┬─────────────┘
│
▼
┌────────────────────────────┐
│ Agent Harness │
│ (software interface) │
└──────┬──────────────┬──────┘
│ │
▼ ▼
┌────────────┐ ┌────────────┐
│ Models │ │ Tools │
└────────────┘ └────────────┘
Here's an example of a harness with less code: https://github.com/badlogic/pi-mono/blob/fdcd9ab783104285764...
We have built two of them now, and clearly the state of the art here can be improved. But it is hard to push too much on this while the models keep improving.
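The core of such a harness really can be tiny. A minimal sketch of the loop, with a stubbed model and a toy tool set (`call_model`, `TOOLS`, and `run_agent` are illustrative names, not taken from the linked repo):

```python
# Minimal agent-harness loop: the model proposes either a tool call or a
# final answer; the harness executes tools and feeds results back.

def call_model(messages):
    # Stub: a real harness would call an LLM API here. This fake model
    # requests one tool call, then finishes.
    if any(m["role"] == "tool" for m in messages):
        return {"type": "final", "text": "done"}
    return {"type": "tool", "name": "echo", "args": {"text": "hi"}}

TOOLS = {"echo": lambda text: text.upper()}

def run_agent(task, max_steps=10):
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(messages)
        if action["type"] == "final":
            return action["text"]
        result = TOOLS[action["name"]](**action["args"])
        messages.append({"role": "tool", "content": result})
    raise RuntimeError("step budget exhausted")
```

Everything beyond this (context management, permissions, sandboxing) is where the real code in those harnesses goes.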
The hard part isn't the loop itself; it's everything around failure recovery.
When a browser agent misclicks, loads a page that renders differently than expected, or hits a CAPTCHA mid-flow, the 9-line loop just retries blindly. The real harness innovation is going to be in structured state checkpointing, so the agent can backtrack to the last known-good state instead of restarting the whole task. That's where the gap between "works in a demo" and "works on the 50th run" lives.
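A rough sketch of what that checkpointing could look like; the step/state shapes here are assumptions for illustration, not any particular framework's API:

```python
import copy

# Checkpoint the agent's state before each risky step; on failure, roll
# back to the last known-good snapshot and retry from there instead of
# restarting the whole task.

def run_with_checkpoints(steps, state):
    checkpoints = [copy.deepcopy(state)]
    i = 0
    while i < len(steps):
        try:
            steps[i](state)  # may raise on a misclick, CAPTCHA, etc.
            checkpoints.append(copy.deepcopy(state))
            i += 1
        except Exception:
            # Discard any half-applied mutations, restore last good state.
            state.clear()
            state.update(copy.deepcopy(checkpoints[-1]))
            # A real harness would re-plan here; we simply retry the step.
            steps[i](state)
            checkpoints.append(copy.deepcopy(state))
            i += 1
    return state
```

The key property is that a failed step's partial side effects never leak into the next attempt.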
Then yeah, it makes sense.
That's why. I was using Claude the other day to greenfield a side project and it wanted to do some important logic on the frontend that would have allowed unauthenticated users to write into my database.
It was easy to spot for me, because I've been writing software for years, and it only took a single prompt to fix. But a vibe coder wouldn't have caught it and hackers would've pwned their webapp.
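To illustrate the class of bug described here (all names are hypothetical; the actual project isn't shown): the vulnerable pattern trusts authentication state the client sends, while the fix derives identity from a server-side lookup.

```python
# Server-side session store: token -> user. In a real app this would be
# a database or session backend; here it's a dict for illustration.
SESSIONS = {"token-abc": "alice"}

def handle_write_vulnerable(request):
    # BAD: trusts a client-supplied flag, so any unauthenticated caller
    # can set it and write into the database.
    if request.get("is_authenticated"):
        return "wrote row"
    return "403"

def handle_write_fixed(request):
    # GOOD: identity comes from a server-side session lookup; a forged
    # or missing token is rejected before any write happens.
    user = SESSIONS.get(request.get("token"))
    if user is None:
        return "403"
    return "wrote row"
```

The fix is one line of logic, but a vibe coder who never reads the generated handler would ship the first version.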
(1) Tooling to enable better evaluation of generated code and its adherence to conventions and norms (2) Process to impose requirements on the creation/exposure of PRDs/prompts/traces (3) Management to guide devs in the use of the above and to implement concrete rewards and consequences
Some organizations will be exposed as being deficient in some or all of these areas, and they will struggle. Better organizations will adapt.
+1. I've tried many times, and failed, to replicate the joy of using that toolchain.
My clipart folder of that kid with the lollipop continues to stay relevant.
The jury's still out on that one, because climate change is an existential risk.
>ah, they're so dumb, they don't get it, the anti-LLM people
This is one of the reasons I see AI failing in the short term. If I call you an idiot, are you more or less likely to be open-minded and try what I'm selling? AI isn't making money; 95% of companies are failing with AI.
https://fortune.com/2025/08/18/mit-report-95-percent-generat...
I mean, your AIs might be a lot more powerful if they were generating money, but that's not happening. I guess being condescending to the 95% of potential buyers isn't really working out.
Not obvious
> To me that statement is as obvious as "water is wet".
Well... is water *wet* or does it *wet things*? So not obvious either.
I'm really dubious when reading posts presenting some things as obvious or trivial. In general, they are not.
Water is not wet. Water makes things wet. Perhaps the inaccuracy of that statement should be taken as a hint that the other statements that you hold on the same level are worthy of reconsideration.
First, we currently have 4 frontier labs, and a bunch of 2nd-tier ones following. The fact that we don't have just oAI, or just Anthropic, or just Google is good in the general sense, I would say. The 4 labs racing each other and each holding SotA status for ~a few weeks at a time is good for the end consumer. They keep each other honest and keep prices down. Imagine if Anthropic could charge $60/MTok, or oAI could charge $120/MTok, for their GPT-4-style models. They can't, in good part because of the competition.
Second, there's a bunch of labs / companies that have released and are continuing to release open models. That's as close to "intelligence on tap" as you can get. And those models are ~6-12 months behind the SotA models, depending on your use case. Even though the labs have largely different incentives to do so, a lot of them are still releasing open models. Hopefully that continues to hold. So not all control will be in the hands of big tech, even if the "best" will still be theirs. At some point "good enough" is fine.
There's also the thing about geopolitics being involved in this. So far we've seen the EU jumping the gun on regulation, and we're kinda sorta paying for it. Everyone is still confused about what can or cannot be done in the EU. The US seems to be waiting to see what happens, and China will do whatever they do. The worst thing that can happen is that at some point the big players (Anthropic is the main driver) push for regulatory capture. That would really suck. Thankfully atm there's this lingering thinking that "if we do it, the others won't so we'll be on the back foot". Hopefully this holds, at least until the "good enough" from above is out :)
The AI labs started down this path using the Manhattan Project as a metaphor and guess what? It's a good metaphor and we should embrace most of the wider implications of that (though I'd love to avoid all the MAD/cold war bullshit this time).
Or less.
And I don't think it's collar color they're going to be checking against.
So I guess I'm saying I agree that this is powerful and dangerous. These are language models, so they're more effective against humans and their languages. And self-preservation, empathy, humanity do not play a role as there is nobody in there to be offended at the notion of intentionally killing more than 9/10 of humanity… for some definitions of humanity, ones I'm sympathetic to.