When I was early in my use of it I would have said I sped up 4x, but now, after using it heavily for a long time, some days it's +20% and other days it's -20%.
It's a very difficult technology to work with because you rarely know which of the two you're getting.
The real thing to note is that when you "feel" lazy while using AI, you are almost certainly in the -20% category. I've had days of not thinking where I had to revert all the code from that day because AI jacked it up so much.
To get that speedup you need to be truly focused 100% of the time, or risk death by a thousand cuts.
- I think given publicly available metrics, it's clear that this isn't translating into more products/apps getting shipped. That could be because devs are now running into other bottlenecks, but it could also indicate that there's something wrong with these studies.
- Most devs who say AI speeds them up assert numbers much higher than what those studies have shown. Much of the hype around these tools is built on those higher estimates.
- I won't claim to have read every study, but of the ones I have checked in the past, the more the methodology impressed me, the less effect it showed.
- Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.
- Review is imperfect, and LLMs produce worse code on average than human developers. That should result in somewhat lowered code quality with LLM usage (although that might be an acceptable trade-off for some). The fact that some of these studies didn't find that is another thing that suggests there are shortcomings in said studies.
I am not sure how much of this is just programmers saying "10x" because that is the meme, but when remotely realistic numbers are mentioned, I see people claiming 20-50%, which lines up with the studies above. E.g. https://news.ycombinator.com/item?id=45800710 and https://news.ycombinator.com/item?id=46197037
> - Prior to LLMs, it was near universally accepted wisdom that you couldn't really measure developer productivity directly.
Absolutely, and all the largest studies I've looked at mention this clearly and explain how they try to address it.
> Review is imperfect, and LLMs produce worse code on average than human developers.
Wait, I'm not sure that can be asserted at all. It's anecdotally not my experience, and the largest study in the link above explicitly discusses it and finds that proxies for quality (like approval rates) indicate an improvement rather than a decline. The Stanford video accounts for code churn (possibly due to fixing AI-created mistakes) and still finds a clear productivity boost.
My current hypothesis, based on the DORA and DX 2025 reports, is that quality is largely a function of your quality control processes (tests, CI/CD etc.)
That said, I would be very interested in studies you found interesting. I'm always looking for more empirical evidence!
AI multiplied the amount of code I committed last month by 5x and it's exactly the code I would have written manually. Because I review every line.
model: Claude Sonnet 3.5/4.5 in VSCode GitHub Copilot. (GPT Codex and Gemini are good too)
There is in my case because it's just CRUD code. The pattern looks exactly like the code I wrote the month prior.
And this is where LLMs excel, in my experience: "Given these examples, extrapolate to these other cases."
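A minimal sketch of the kind of extrapolation I mean, assuming a toy in-memory "database" (all names and fields here are hypothetical, not from any real codebase):

```python
# Two existing handlers establish the CRUD pattern (hypothetical example):
def create_user(db, name, email):
    """Insert a user record and return its id."""
    db["users"].append({"id": len(db["users"]) + 1, "name": name, "email": email})
    return db["users"][-1]["id"]

def create_project(db, title, owner_id):
    """Insert a project record and return its id."""
    db["projects"].append({"id": len(db["projects"]) + 1, "title": title, "owner_id": owner_id})
    return db["projects"][-1]["id"]

# Given those two as context, an LLM will reliably produce the third
# by following the same shape:
def create_task(db, description, project_id):
    """Insert a task record and return its id."""
    db["tasks"].append({"id": len(db["tasks"]) + 1, "description": description, "project_id": project_id})
    return db["tasks"][-1]["id"]
```

The third function carries no new design decisions, which is exactly why this case plays to the model's strengths and why reviewing every line stays cheap.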