I'm glad this has been working for you -- generally any time I actually have a really difficult problem, ChatGPT just makes up the API I wish existed. Then when I bring it up to ChatGPT, it just apologizes and invents new API.
That means LLMs are great for scaffolding, prototypes, the v0.1 of new code, especially when it's very ordinary logic in a language or library you're not 100% up to speed on.
One project I was on recently was translation: converting a JS library into Kotlin. In-editor AI code completion made this really quick: I pasted a snippet of JS for translation in a comment, and the AI completed the Kotlin version. It was frequently not quite right, but it was way faster than without. In particular, when there were repeated blocks of code for different cases that differed only slightly, once I got the first block correct, the LLM picked up on the pattern in-context and applied it correctly for the remaining blocks. Even when it's wrong, if it has an opportunity to learn locally, it can do so.
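The comment-driven translation workflow looks something like this. (The commenter's target was Kotlin; the translated function is shown in Python here just to keep the sketch runnable -- the workflow is identical: paste the source snippet as a comment, let the in-editor completion produce the target-language version, then verify it.)

```python
# Workflow sketch: the source-language snippet is pasted as a comment,
# and the completion model is expected to emit the translation below it.

# Original JS, pasted as a comment for the model to translate:
#   function clamp(value, lo, hi) {
#     return Math.min(Math.max(value, lo), hi);
#   }

def clamp(value, lo, hi):
    """The kind of completion the model would produce from the comment above."""
    return min(max(value, lo), hi)

print(clamp(15, 0, 10))  # -> 10
```

The verification step matters: as noted above, the first completion is frequently not quite right, but once one block is corrected by hand, the pattern tends to carry over to the remaining blocks in-context.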
The sweet spot is when you need something and you're sure it's possible but don't know how (or figuring it out would be too time-consuming). E.g. change the CSS to X, rewrite this Python code in TypeScript, use the pattern of this code to do Y, etc.
Reminds me of the early days of Google, where you had to learn how to write a good search query. You learned you needed more than a word or two, but not a whole essay, etc.
I think that you have a serious misunderstanding of the capabilities of LLMs - they cannot reason out relationships among documents that easily. They cannot even tell you what they don't know to finish a given task (and I'm not just talking one-shot here, agent frameworks suffer from the same problem).
You need to do some serious RLHF to get something good out of LLMs.
Now if you throw larger context or more obscure interface expectations at it, it'll start to discard code and hallucinate.
If so-called "prompt engineering" goes so far that only one solution remains, you don't need the LLM.
- Getting over the blank canvas hurdle: this is great for kick-starting a small project. Even if the code isn't amazing, it gets my brain to the "start writing code and thinking about algo/data-structures/interesting-problem" stage rather than being held up at "Where to begin?" Metaphorically, it helps me decide where to place my first stroke.
- Sometimes the LLM has helped when I'm stuck on issues, but this is hit and miss. More specifically, it will often show a solution that jogs my brain and gets me there: "oh yeah, of course." However, I've noticed I'm more often in that state when tired and needing sleep, so the LLM might let me push a bit longer, making up for a tired brain. Honestly, this may be more harmful: without the LLM I go to sleep, and then, magically, like brains do, I solve 4 hours of issues in 20 minutes after waking up.
So the LLM might be helping in ways that actually indicate you should sleep, as your brain is slooooowwwwing down.
My experience has been similar: it is amazing for stuff I am a beginner at but kinda useless for my actual work. It was invaluable today when I was trying to grasp CA zoning laws, but it's almost useless for my coding.
This also points to why it will never (imo) be "intelligent". It will never be able to take all its knowledge and use that to solve a problem it doesn't have training data for.
It'll generate a bunch of queries to Google (well, "to Bing" I guess in that case) based on your question, read the results for you, base its answer on the results and provide you with sources that you can check if it used anything from that webpage.
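That search-augmented flow can be sketched as a small pipeline: generate queries from the question, run them, ground the answer in the results, and surface the sources. This is a minimal illustration only; every function here is a hypothetical stand-in, not a real Copilot or Bing API.

```python
# Sketch of a search-grounded answer loop. All functions are hypothetical
# stand-ins for what the commenter describes the product doing internally.

def generate_queries(question):
    # A real system would ask the LLM to propose several search queries.
    return [question, f"{question} best practices"]

def search(query):
    # Stand-in for a web search call; returns (url, snippet) pairs.
    return [(f"https://example.com/{abs(hash(query)) % 100}",
             f"snippet for {query!r}")]

def answer_with_sources(question):
    results = [hit for q in generate_queries(question) for hit in search(q)]
    # A real system would feed the snippets back to the LLM to compose
    # the answer; here we just summarize so the citation trail is visible.
    answer = f"Answer to {question!r}, grounded in {len(results)} snippets"
    sources = [url for url, _ in results]
    return answer, sources

ans, srcs = answer_with_sources("how do CSS grid gaps work")
```

The point of the design is the last step: because the answer is tied to the fetched pages, you get a list of sources you can check yourself rather than an unverifiable claim.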
I only use ChatGPT for documentation when I have no idea where I'm going at all, and I need a lay of the land on best practices and the way forward.
For specifics, Bing Copilot. Essentially a true semantic web search.
It means that you are working on something that no one else has ever done before...
...or you aren't able to describe your problem correctly.