Sometimes that means I have a follow-on question and iterate from there. That's fine too.
This is great if you are an experienced developer who can tell the difference between "in the ballpark and fixable" and "in the ballpark but hopeless."
That is amazing and important.
The headline should be "LLM gives flawless responses to 48% of coding questions."
While true, Stack Overflow wasn't much different. New devs would go there, grab a chunk of code, and move on with their day. The canonical example is the SQL-injection-prone PHP advice shared there for more than a decade.
What does this do to beginners who are just learning to program? Is this helping them by forcing them to become critical reviewers, or harming them by being a bad role model?
It's like the saying "the fog of war": at best you have incomplete and flawed information. Programming is just like that.
Harming them.
I told a new grad employee to write some unit tests for his code, explained the high level concepts and what I was looking for, and pointed him at some resources. He spun his wheels for weeks, and it turned out he was trying to get ChatGPT to teach him how to do it, but it would always give him wrong answers.
I eventually had to tell him, point blank, to stop using ChatGPT, read the articles, and ask me (or a teammate) if he needed help.
Answers don't exist in a vacuum. The chat interface allows feedback and corrections. Users can paste an error they're getting, or even say "it doesn't work", and GPT may correct itself or suggest an alternative.
I think we all partly learnt about code quality by having our code break things in the real world.
That's why classes are taught by professors and not undergrads. Professors are at least supposed to know what they don't know.
When students think of ChatGPT as the drunk frat bro they see doing keg stands at the Friday basement party, rather than as an expert, they use it differently.
> What's especially troubling is that many human programmers seem to prefer the ChatGPT answers. The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT at a rate of 35 percent and didn't catch AI-generated mistakes at 39 percent.
"Additionally, this work has used the free version of ChatGPT (GPT-3.5)"
> For each of the 517 SO [Stack Overflow] questions, the first two authors manually used the SO question’s title, body, and tags to form one question prompt and fed that to the free version of ChatGPT, which is based on GPT-3.5. We chose the free version of ChatGPT because it captures the majority of the target population of this work. Since the target population of this research is not only industry developers but also programmers of all levels, including students and freelancers around the world, the free version of ChatGPT has significantly more users than the paid version, which costs a monthly rate of 20 US dollars.
Note that GPT-4o is now also freely available, although with usage caps. Allegedly the limit is one-fifth the turns of paid Plus users, who are said to be limited to 80 turns every three hours, which would mean 16 free GPT-4o turns per three hours. There is some indication, though, that the limits are currently somewhat lower in practice and overall in flux.
In any case, GPT-4o answers should be far more competent than those by GPT-3.5, so the study is already somewhat outdated.
I find ChatGPT more useful from a software architecture point of view and from a trivial code point of view, and least useful at the mid-range stuff.
It can write you a great regex (make sure you double-check it) and it can explain a lot of high-level concepts in insightful ways, but it has no theory of mind -- so it never responds with "It doesn't make sense to ask me that question -- what are you really trying to achieve here?", which is the kind of thing an actually intelligent software engineer might say from time to time.
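The "double-check it" caveat is worth taking seriously. A minimal illustration (hypothetical, not from the study): a date-matching regex a model might plausibly suggest, which looks right but happily accepts impossible dates.

```python
import re

# A plausible LLM-suggested pattern for ISO dates (hypothetical example).
date_re = re.compile(r"^\d{4}-\d{2}-\d{2}$")

print(bool(date_re.match("2024-05-17")))  # True
# Double-check it: the pattern also accepts impossible dates.
print(bool(date_re.match("2024-13-99")))  # True -- which is why you verify
```

The pattern checks shape, not validity; an actual date parser (or range checks) is still on you.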
I just used GPT-4o to refactor 50 files from React classes to React function components, and it did so almost perfectly every time. Some of these classes were as long as 500 lines of code.
I believe that AI will be a perfect programmer in the future for all niche areas. My point is that frontend will probably be the first niche to be mastered.
> AI will be a perfect programmer in the future for all NON-niche areas
There's going to be a positive/negative feedback loop that makes it hard for new languages and frameworks to gain popularity. And the lack of popularity means lack of training material being generated for the AI to learn.
When choosing a tech stack of the future, the ability for AI to pair will be a key consideration.
I've been pairing with GPT since 3.5-turbo. I run 20-100 queries a day (have an IDE integration). The improvements for GPT-4 over 3.5 are significant.
So far GPT-4o seems like a step-up for most (not all) queries I've run through it. Based on the pricing and speed, my guess is it's a smaller, more optimized model and there are some tradeoffs in that. I'm guessing we'll see a more expensive flagship model from OpenAI this year.
But honestly, these details don't really matter... Regardless of the performance and accuracy of the models today, the trend is obvious. AI will be the primary interface for writing all but the most cutting edge code.
Two years ago, I thought an AI writing code was 50 years away. Yesterday, I took a picture of an invoice on my phone, and asked GPT to recreate it in HTML and it did so perfectly.
I'm still on the fence about LLMs for coding, but from talking to friends, they primarily use it to define a skeleton of code or generate code that they can then study and restructure. I don't see many developers accepting the generated code without review.
My expectation isn't that the AI generates correct code on the first try. The AI will be useful as an 'agent in the loop':
- Spec or test suite written as bullets
- Define tests and/or types
- Human intervenes with edits to keep it in the right direction
- LLM generates code, runs compiler/tests
- Output is part of new context
- Repeat until programmer is happy
This should be feasible this holiday season.
- function calling: the LLM can take action
- Integration to your runtime: functions called by the LLM can run your tests, linters, compiler, etc
- Agents: the LLM can define what to do, execute a few tasks, and keep going with more tasks generated by itself
- Codebase/filesystem access: could be RAG or just ability to read files in your project
- Graceful integration of the human in the agent loop: this is just an iteration of the agent, but it seems useful for it to ask for input from the programmer. Maybe even something more sophisticated where the agent waits for the programmer to change stuff in the codebase
Google and Stack Overflow are useless here; people have different situations than I do.
I find it's worse at providing working code (much less good code), but pretty good at telling me why my code doesn't compile, which is 80% of the work anyway!
Also, since you can always tell empirically whether a coding response works, mistakes are much more easily spotted than in other forms of LLM output.
Debugging with AI is more important than prompting. It requires an understanding of the intent, which allows the human to prompt the model in a way that lets it recognize its oversights.
Most code errors from LLMs can be fixed by them. The problem is an incomplete understanding of the objective, which makes them commit to incorrect paths.
Being able to run code is a huge milestone. I hope the GPT5 generation can do this and thus only deliver working code. That will be a quantum leap.
> Q&A platforms have been crucial for the online help-seeking behavior of programmers. However, the recent popularity of ChatGPT is altering this trend. Despite this popularity, no comprehensive study has been conducted to evaluate the characteristics of ChatGPT’s answers to programming questions. To bridge the gap, we conducted the first in-depth analysis of ChatGPT answers to 517 programming questions on Stack Overflow and examined the correctness, consistency, comprehensiveness, and conciseness of ChatGPT answers. Furthermore, we conducted a large-scale linguistic analysis, as well as a user study, to understand the characteristics of ChatGPT answers from linguistic and human aspects. Our analysis shows that 52% of ChatGPT answers contain incorrect information and 77% are verbose. Nonetheless, our user study participants still preferred ChatGPT answers 35% of the time due to their comprehensiveness and well-articulated language style. However, they also overlooked the misinformation in the ChatGPT answers 39% of the time. This implies the need to counter misinformation in ChatGPT answers to programming questions and raise awareness of the risks associated with seemingly correct answers.
Current OpenAI products either use much lower-parameter models under the hood than they did originally, or maybe it's a side effect of context stretching.
Odds of a correct answer within n attempts = 1 - (1/2)^n
Nice, that’s exponentially good!
People asking for 'right' answers don't really get it. I'm sorry if that sounds abrasive, but these people give LLMs a bad name due to their own ignorance or malice.
I remember some Amazon programmer trashing LLMs for 'not being 100% accurate'. It was really an ID-10-T error. LLMs aren't used for 100% accuracy. If you are doing that, you don't understand the technology.
There is a learning curve with LLMs, and it seems a few people still don't get it.
I think you're wrong about that. They shouldn't be, but they clearly are.
It cracks me up how consistent this is.
See post criticizing LLMs. Check if they're on the latest version (which is now free to boot!!).
Nope. Seemingly...never. To be fair, this is probably just an old study from before 4o came out. Even still. It's just not relevant anymore.
On the HumanEval benchmark (https://paperswithcode.com/sota/code-generation-on-humaneval), GPT-4 can generate code that works on the first pass 76.5% of the time.
Meanwhile, on SWE-bench (https://www.swebench.com/), GPT-4 with RAG can solve only about 1% of the GitHub issues used in the benchmark.