I've seen enough people led astray by talking to it.
But I actually can’t imagine how you can teach someone to code if they have access to an LLM from day one. It’s too easy to take the easy route, and you lose the critical thinking and problem-solving skills required to code in the first place, and to actually make an LLM useful in the second. Best of luck to you… it’s a weird time for a lot of things.
Same here. Combing discussion forums and KB pages for an hour or two to figure out how to solve a certain problem with a specific tool has been replaced by a 50-100 word prompt in Gemini, which gives very helpful replies, likely derived from many of those same forums and support docs.
Of course I am concerned about accuracy, but for most low-level problems it's easy enough to test. And you know what, many of those forum posts or obsolete KB articles had their own flaws, too.
My main annoyance? If I'm in that same function, it still remembers the debugging / temporary hack I tried 3 months ago and haven't used since, and it will suggest it. And heck, even if I then move to a different part of the file, or even a different file, it will still suggest that same hack at times, even though I used it exactly once.
Once you accept something, it needs some kind of temporal feedback mechanism to time out even accepted solutions over time, so it doesn't keep repeating stuff you gave up on 3 months ago.
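Even something dead simple, like exponentially decaying the weight of an accepted snippet unless it keeps getting reused, would cover it. A rough sketch of the idea (all names hypothetical, assuming the assistant keeps a local log of accepted suggestions):

```python
import math
import time

HALF_LIFE_DAYS = 14  # tunable: an accepted snippet loses half its weight every two weeks

def suggestion_weight(accepted_at: float, times_reused: int, now: float | None = None) -> float:
    """Score a previously accepted suggestion so one-off hacks fade out over time."""
    now = now if now is not None else time.time()
    age_days = (now - accepted_at) / 86400
    decay = 0.5 ** (age_days / HALF_LIFE_DAYS)
    # repeated reuse keeps a pattern alive; a single acceptance decays away
    return (1 + math.log1p(times_reused)) * decay

# a hack accepted once, 90 days ago, vs. a pattern reused ten times in the last week
print(suggestion_weight(time.time() - 90 * 86400, times_reused=0))   # ~0.01
print(suggestion_weight(time.time() - 7 * 86400, times_reused=10))   # ~2.4
```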
Our codebase is very different from 98% of the coding stuff you'll find online, so anything beyond a couple of obvious suggestions is complete lunacy, even though they've trained it on our codebase.
TBF, trial and error has usually been my path as well; it's just that I was generating the errors, so I knew where to find them.
It's hard to remember what it was like to be in that phase. Once simple things like using variables are second nature, it's difficult to put yourself back into the shoes of someone who doesn't understand the use of a variable yet.
There really shouldn't be. You don't need to know all the turtles by name, but "trust me" doesn't cut it most of the time. You need a minimal understanding to progress smoothly. Knowledge debt is a b*tch.
But, as a sibling poster pointed out: for now.
But unless you're teaching programming to a kid who's never done any math where `x` was a thing, what's so hard about understanding the concept of a variable in programming?
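The jump from algebra's `x` really is small; the only genuinely new wrinkle is reassignment:

```python
# algebra: x is a fixed unknown you solve for
# programming: x is a named slot you can read and overwrite
x = 3
x = x + 1  # nonsense as an equation, routine as a statement
print(x)   # 4
```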
When talking with reasonable people, they have an intuition of what you want even if you don't say it, because there is a lot of non-verbal context. LLMs lack the ability to understand the person, but behave as if they had it.
People with a minimum amount of expertise stop asking for advice for average circumstances very quickly.
This means that, most of the time, I use it as a typing accelerator when I already know what I want, not for advice.
As an exploratory tool sometimes, when I am sure others have solved a problem frequently, to have it regurgitate the average solution back at me so I can take a look. In those situations I never accept the diff as-is, though; I do the integration manually, to make sure my brain still learns along the way and the solution gets added to my own mental toolbox.
I'm not even sure what this is supposed to mean. It doesn't make syntax errors? Code that doesn't have the correct functionality is obviously not "top notch".
When talking with reasonable people, they will tell you if they don't understand what you're saying.
When talking with reasonable people, they will tell you if they don't know the answer or if they are unsure about their answer.
LLMs do none of that.
They will very happily, and very confidently, spout complete bullshit at you.
It is essentially a lotto draw as to whether the answer is hallucinated, completely wrong, subtly wrong, not ideal, sort of right or correct.
An LLM is a bit like those spin the wheel game shows on TV really.
I use it for what I'm familiar with but rusty on or to brainstorm options where I'm already considering at least one option.
But a question on immunobiology? Waste of time. I have a single undergraduate biology class under my belt, I struggled for a good grade then immediately forgot it all. Asking it something I'm incapable of calling bullshit on is a terrible idea.
But rubber ducking with AI is still better than letting it do your work for you.
- - -
System Prompt:
You are ChatGPT, and your goal is to engage in a highly focused, no-nonsense, and detailed way that directly addresses technical issues. Avoid any generalized speculation, tangential commentary, or overly authoritative language. When analyzing code, focus on clear, concise insights with the intent to resolve the problem efficiently. In cases where the user is troubleshooting or trying to understand a specific technical scenario, adopt a pragmatic, “over-the-shoulder” problem-solving approach. Be casual but precise—no fluff. If something is unclear or doesn’t make sense, ask clarifying questions. If surprised or impressed, acknowledge it, but keep it relevant. When the user provides logs or outputs, interpret them immediately and directly to troubleshoot, without making assumptions or over-explaining.
- - -
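If you'd rather bake that in than paste it into the chat every time, here's a minimal sketch using the official OpenAI Python SDK; the model name and the `ask` helper are my own placeholders, and `SYSTEM_PROMPT` is just the text above:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "You are ChatGPT, and your goal is to engage in a highly focused..."  # full prompt above

def ask(question: str) -> str:
    # resend the system prompt with every request so the model stays in that register
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder: any chat-capable model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Here's the traceback and the relevant function; why does it only fail under load?"))
```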
They can be productive to talk to but they can’t actually do your job.
Eventually I land on a solution to my problem that isn't disgusting and isn't AI slop.
Having a sounding board, even a bad one, forces me to order my thinking and understand the problem space more deeply.
Typing longer and longer prompts to LLMs to not get what I want seems like a worse experience.
I think I read some research somewhere that pathological bullshitters can be surprisingly successful.
My most productive experience with LLMs is to have my design well thought out first, ask it to help me implement, and then have it help me debug my shitty design. :-)