Explain to them how to debug and fix the code they've written.
Which is pretty much what you would do with an inexperienced human software developer.
Looking at this with fresh eyes, it's both shocking to me that this sort of thing is even possible, and yet also completely unsurprising as yet another emergent capability of LLMs.
We live in interesting times.
Beware of bugs in the above code; I have only proved it correct, not tried it. - Knuth
You can teach GPT-3 arithmetic - https://imgur.com/a/w3DAYOi
Basically 100% accuracy up to about 13-digit addition and >90% after that.
What else can you teach GPT without changing weights ?
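Teaching without changing weights is just in-context (few-shot) learning: you stuff worked examples into the prompt and let the model complete the next one. A minimal sketch of building such a prompt; how you actually send it to a model is left out, and no specific API is assumed.

```python
import random

def build_addition_prompt(a: int, b: int, n_examples: int = 8) -> str:
    """Few-shot prompt: worked addition examples, then the open query."""
    rng = random.Random(0)  # deterministic examples for the sketch
    lines = []
    for _ in range(n_examples):
        x, y = rng.randint(10**12, 10**13), rng.randint(10**12, 10**13)
        lines.append(f"{x} + {y} = {x + y}")
    lines.append(f"{a} + {b} =")  # the model is asked to complete this line
    return "\n".join(lines)
```

The model's weights never change; all the "teaching" lives in the prompt text and vanishes when the context does.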
This is such a circular thing that it's amazing to see. LLMs use a NN because they're trying to encode a probability function for generating the passage. And now you are encoding another n-gram-follower exercise (i.e., 1 + 1 = 2) on top of it :)
The graphs you just posted do not support that, they'd support at most 100% accuracy up to 4 digits.
It usually does notice inconsistencies between A and B when asked this. But its ways of reconciling inconsistencies can be bizarre and suggest a very superficial understanding of concepts.
For example, it once reconciled an inconsistency by saying that, yes, 2 * 2 = 4, but if you multiply both sides of that equation by a big number, that's no longer true.
I will be super impressed the day we have a model that can read an arithmetic textbook and come out with reliable arithmetic skills.
Fair enough, but have you explained the axioms of arithmetic to it? It has only memorized examples that it has seen; it has a right to be skeptical until it's seen our axioms and proofs about what is always true in mathematics.
When I was a child I was skeptical that an odd number + an even number is always odd, etc., for very large numbers, until I saw it proven to me by induction (when I was 6, I think; imo this was reasonable skepticism).
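For what it's worth, that childhood fact has a short induction proof; a sketch:

```latex
\textbf{Claim.} If $m$ is odd, then $m + 2k$ is odd for all $k \ge 0$.

\textbf{Proof} (induction on $k$).
\emph{Base:} $m + 0 = m$ is odd.
\emph{Step:} suppose $m + 2k = 2j + 1$ for some integer $j$.
Then $m + 2(k+1) = (2j + 1) + 2 = 2(j+1) + 1$, which is odd. $\square$

Since every even number is $2k$ for some $k \ge 0$, odd $+$ even is odd.
```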
Now, ChatGPT probably has seen these proofs, to be fair, but it may not be connecting the dots well enough yet. I would expect this in a later version that has been specifically trained to understand math (by which I really mean math, not just performing calculations). And imagine what such a model will prove for us then!
I think one problem with these models is that all their knowledge is soft. They never learn true, universal rules. They seem to know the rules of grammar, but only because they stick to average-sounding text, and the average text is grammatical. At the edges of the distribution of what they've seen, where the data is thin, they have no rules for how to operate, and their facade of intelligence quickly falls apart.
People can reliably add numbers they've never seen before. The idea that it would matter whether the number has been seen before seems ridiculous and fundamentally off-track, doesn't it? But for GPT, it's a crapshoot, and it gets worse the farther it gets away from stuff it's seen before.
But if anyone wants to give it to me for free, I would happily make a $1000 bet that I can get GPT-4 to make the same mistake.
Like, as far as the model is concerned, how can it distinguish between the task being "do your best, but if you do make an error, correct it" and "make some mistakes like in this example and then fix them"?
For decades in reinforcement learning we've had Q learning, which promises to solve any optimization problem if only we can build a powerful enough function approximator. It can even learn off-policy, meaning it can just watch from the sideline and find the optimal solution. It works for toy problems, and it works in theory; there are even formal proofs that it will work given infinite time and resources. And yet in practice it often becomes unstable and collapses.
Supervised learning is one thing; having a model remain stable while bootstrapping through a complex environment is another. GPT is supervised learning, so far. Let's see if it can bootstrap.
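To make the bootstrapping point concrete, here is a toy sketch of tabular Q-learning on a 5-state chain (states 0..4, actions -1/+1, reward 1 for reaching state 4). The hyperparameters are arbitrary illustrative choices, not tuned values. Note the update bootstraps from the max over next-state actions regardless of what the behavior policy actually does next, which is what makes it off-policy.

```python
import random

N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)
alpha, gamma, eps = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):                      # episodes
    s = 0
    for _ in range(200):                  # cap episode length
        # epsilon-greedy behavior policy
        if random.random() < eps:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # off-policy update: bootstrap from the best next action,
        # not the one the behavior policy will actually take
        target = r + gamma * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

# greedy policy after training: move right from every non-goal state
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
```

On a tiny tabular problem like this it converges happily; the instability shows up when the Q-table is replaced by a deep function approximator bootstrapping off its own estimates.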
Putting aside the incongruity of Google researchers using the OpenAI model, I'm curious how GPT-4 would do in this situation. Probably its zero shot attempts at coding would be better, and maybe its self criticisms would be better too.
I’ll admit that I only have had time so far to read the abstract, and I’m not sure what their baseline is, but a 2-3% improvement doesn’t sound like a quantum leap forward that you’d expect from the title. Heck, I’d think that’s likely within expected sampling errors.
I’m not sure about others’ experience, and while I keep reading articles showing impressive-seeming examples, my few forays into getting ChatGPT to write code were actually completely useless. Even with follow-on prompts to correct itself.
The other day I asked it what covid case fatality rates were in 2020. After all the various opinions at the time, I was curious to see what it was pre-vaccine. It would alternately tell me that it couldn’t give me data for 2020 because it only had data up to Sep. 2021, and then give me wildly varying numbers.
Is this a Roko’s Basilisk trying to lure me into a false sense of security… haha.
With respect to GPT etc. as a copilot, the current dialogue seems to focus on "ask GPT to generate code to do X", then "just paste in the error message to fix bugs in the code GPT generates".
A.) Why is GPT generating code that results in simple compiler errors in the first place (which is why GPT probably shouldn't be used to generate code for, or replace devs on, real projects yet), and
B.) error messages are (just guessing here) probably <1% of the actual errors in most codebases.
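The "paste the error back in" workflow can be sketched as a small loop. `ask_model` below is a hypothetical placeholder for whatever LLM call you use; no specific API is assumed.

```python
import subprocess
import tempfile

def run_python(code: str) -> tuple[bool, str]:
    """Run a snippet in a subprocess; return (succeeded, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    proc = subprocess.run(["python3", path], capture_output=True,
                          text=True, timeout=30)
    return proc.returncode == 0, proc.stderr

def repair_loop(ask_model, task: str, max_rounds: int = 3) -> str:
    """Generate code, run it, and feed any error back for a fix."""
    code = ask_model(f"Write Python code to: {task}")
    for _ in range(max_rounds):
        ok, err = run_python(code)
        if ok:
            return code
        code = ask_model(f"This code failed with:\n{err}\nFix it:\n{code}")
    return code  # give up after max_rounds; may still be broken
```

Note the built-in limitation: the loop only catches failures that actually surface as an error message, which is exactly the weakness point B is getting at.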
I personally know of a few large companies laying off devs over this.
IMO, the tech debt we're going to see in 6 months will probably be huge. Now would be a good time to start a staffing agency of human experts who can come in and fix this type of problem (extricating massive amounts of GPT-generated code without starting from scratch), because there will be a bunch of fires to put out, and those fires will be worth $.
They’re laying people off and replacing them with ChatGPT generating code? That seems... aggressive. Or are they laying off devs who copy-pasted GPT-generated code?
Nah they deserve to eat shit and the staffing agencies hired to fix the bad AI code will undoubtedly be people abroad who barely speak English and will only tangle it up worse. I would actually pay to be a fly on the wall in those meetings listening to people lose their minds in frustration.
None of those three would function, nor would they throw an error. Prompts to correct itself would not improve things.
So I did the natural thing and started to debug it myself. At which point, I couldn’t help but ask myself why I was debugging machine-generated code when I could just stop being lazy and build it from first principles.