If you had told me a decade ago that I could have a fuzzy search engine on my desktop, one I could use to vaguely describe some program I needed & that would go out into the universe of publicly available source code & return something as close to what I asked for as it could find, that would have been mind-blowing. Suddenly I have (slightly lossy) access to all the code ever written, if I can describe it.
Same for every other field of human endeavour! Who cares if AI can “think” or “do new things”? What it can do is amazing & sometimes extremely powerful. (Sometimes not, but that’s the joy of new technology!)
Some people can't see past how the trick is done (take training data and do a bunch of math/statistics on it), but the fact that LLMs are able to build the thing is in and of itself interesting and useful (and fun!).
Edit: for the young, WYSIWYG (what you see is what you get) tooling was common for all sorts of languages, from C++ to Delphi to HTML. You could draw up anything you wanted. Many had native bindings to data sources of all kinds. My favourite was actually HyperCard, because I learned it in grade school.
Good times!
Boilerplate generation was never, ever the bottleneck.
* Scaffolding, first and foremost - It's usually fine for this. I typically ask something like "give me the industry-standard project structure for X language as designed by a Staff-level engineer", blah blah, just give me a sane project structure to follow and maintain so I don't have to wonder after switching to yet another programming language (I'm a geek, sue me).
* Code that makes sense at first glance and is easy to maintain / manage, because if you blindly take code you don't understand, you'll regret it the moment you're called in for a production outage and don't know your own codebase.
I’d say it made me around 2x as productive.
I don’t think the cynicism of HN is justified, but I think what people forget is that it takes several months of really investing time into learning how to use AI well. Looking at some of the prompts people write and expect to just work, it's no wonder that approach only works for React-like apps.
At the same time, these tools have helped me reduce the development time on this project by orders of magnitude. There are two prominent examples.
--- Example 1:
The first relates to internal tooling. I was debugging a gnarly problem in an interpreter. At some point I had written code to do a step-by-step dump of the entire machine state to a file (in JSON), and I was looking through it to figure out what was going wrong.
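The comment doesn't show the dump code or the VM's actual state layout, so as a hedged sketch, a step-by-step JSON state dump of the kind described can look like this (the `ToyVM`, its fields `pc`/`acc`/`memory`, and the two opcodes are all invented for illustration):

```python
import json

# A toy stand-in VM; the original interpreter's actual state layout is unknown,
# so the field names here (pc, acc, memory) are illustrative assumptions.
class ToyVM:
    def __init__(self, program):
        self.program = program
        self.pc = 0          # program counter
        self.acc = 0         # single accumulator register
        self.memory = [0] * 8

    def step(self):
        """Execute one instruction; return False once the program has halted."""
        if self.pc >= len(self.program):
            return False
        op, arg = self.program[self.pc]
        if op == "add":
            self.acc += arg
        elif op == "store":
            self.memory[arg] = self.acc
        self.pc += 1
        return True

def dump_trace(vm, path):
    """Write one JSON object per step (JSON Lines), ready for a UI to browse."""
    with open(path, "w") as f:
        while True:
            state = {"pc": vm.pc, "acc": vm.acc, "memory": list(vm.memory)}
            f.write(json.dumps(state) + "\n")
            if not vm.step():
                break

dump_trace(ToyVM([("add", 3), ("store", 2)]), "trace.jsonl")
```

One object per line (rather than a single large JSON array) keeps the trace streamable, which matters once a long-running program produces thousands of steps.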
In a flash of insight, I asked my AI service (I'll leave names out since I'm not trying to promote one over another) to build a React UI for this information. Over the course of a single day, I (definitely not a frontend dev by history) worked with it to build out a beautiful, functional, easy-to-use interface for browsing step data for my VM, with all sorts of creature comforts (for example, if you hover over a memory cell whose value happens to be a valid address of another memory cell, the target cell gets automatically highlighted).
This single tool has reduced my debugging time from hours or days to minutes. I never would have built it without AI support, because I'm simply not experienced enough in frontend work to build a functional UI quickly. And this thing built an advanced UI for me based on a conversation. I was truly impressed.
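The hover-to-highlight comfort mentioned above boils down to a small lookup rule. As a sketch only (the real tool is a React UI, and the memory representation here is invented), the decision logic could look like:

```python
# Hypothetical sketch of the hover-highlight rule: if the hovered cell's value
# is itself a valid address into memory, report the target index so a UI could
# highlight it as well.
def linked_cell(memory, hovered):
    value = memory[hovered]
    # Only plain integers in range count as addresses; self-links are ignored.
    if isinstance(value, int) and 0 <= value < len(memory) and value != hovered:
        return value
    return None

mem = [7, 2, "hello", 99]
print(linked_cell(mem, 0))  # 7 is out of range for a 4-cell memory -> None
print(linked_cell(mem, 1))  # 2 is a valid address -> 2
```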
--- Example 2:
As part of verifying correctness for my project, I wanted to generate a set of tests that validated the runtime behaviour. The task consisted of writing a large set of reference programs and verifying that their behaviour was identical between a reference implementation and the real implementation.
Half-decent coverage meant at least a hundred or so tests were required.
Here I was able to use agentic AI to reduce the test-case construction time from a month to about a week. I asked the AI to come up with a coverage plan and write the test-case ideas to a markdown file in an organized, categorized way. Then I went through each category in that file and had the AI generate the test cases and integrate them into the code.
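The reference-vs-real comparison described above is essentially differential testing. A minimal sketch, with toy Python callables standing in for the two implementations (the comment doesn't show the project's actual interfaces, so everything named here is an assumption):

```python
# Differential-testing loop: run every test program through both the trusted
# reference and the real implementation, and collect any mismatches.
def differential_test(programs, reference, real):
    failures = []
    for program in programs:
        expected = reference(program)
        actual = real(program)
        if expected != actual:
            failures.append((program, expected, actual))
    return failures

def reference(expr):
    # Trusted oracle: Python's own expression evaluator.
    return eval(expr)

def real(expr):
    # Deliberately buggy stand-in: mishandles subtraction, so the
    # differential loop has something to catch.
    return eval(expr.replace("-", "+"))

tests = ["1 + 2", "10 - 4", "3 * 3"]
failures = differential_test(tests, reference, real)
print(failures)  # [('10 - 4', 6, 14)]
```

The value of the pattern is that the reference programs themselves are the only creative input; once the harness exists, each AI-generated test case is just one more entry in the list.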
---
I was and remain a strong skeptic of the hype around this tech. It's not the singularity, it's not "thinking". It's all pattern matching and pattern extension, but in ways so sophisticated that it feels like magic sometimes.
But while the skeptical perspective is something I value, I can't deny that there is core utility in this tech that has a massive potential to contribute to efficiency of software development.
This is a tool that we as industry are still figuring out the shape of. In that landscape you have all sorts of people trying to evangelize these tools along their particular biases and perspectives. Some of them clearly read more into the tech than is there. Others seem to be allergically reacting to the hype and going in the other direction.
I can see that there is both noise and fundamental value. It's worth trying to filter out the noise while still developing a decent sense of the shape of that fundamental value. It's a de facto truth that these tools are in the future of every mainstream developer.
Like 80% of writing code is just being a glorified autocomplete, and AI is exceptional at automating those aspects. Yes, there is a lot more to being a developer than writing code, but in those instances AI really does make a difference in the amount of time one can spend focusing on domain-specific deliverables.