Of almost all the tools I've used to date for frontend work, none really replaces using Cursor and being able to dive deep; however, Cline does seem to have gotten significantly better.
The day when you can come back to a fully working web app of moderate complexity after cleaning the gutters is still some way off, but that's the dream.
Some other things I picked up:
- If you formulate a good prompt with small (but sufficient) context and it still makes mistakes after one attempt to feed the error message back to it, it's probably never going to get it, no matter how many iterations you do. It will stay stuck in a rut forever. Better not to argue with it.
- o1-2024-12-17 is genuinely a big step change.
It isn’t foolproof but it has a much better success rate for me than letting it spin.
It's like having a senior software dev over your shoulder. He knows pretty much everything about coding but he often comes to work drunk...
And that was the best analogy I could come up with: I think it's sped up my work enormously because I limit what it does, rather than let it loose... if that makes sense.
As an example, I was working on a C# project with a repository layer containing a bunch of methods that each just retrieved one or two values from the database, e.g. GetUsernameFromUserGuid, GetFirstnameFromUserGuid and so on. They each had a SQL query in them (I don't use ORMs...).
They weren't hard to write but there were quite a few of them.
Copilot learned after the first couple what I was doing, so I only needed to type "GetEmail" and it finished it off (GetEmailAddressFromUserGuid) and did the rest, including the SQL query, in the style I used, etc.
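To make the pattern concrete: here's a minimal sketch of that kind of repository layer, translated to Python with sqlite3 for illustration (the original was C# with raw SQL; the table, column, and method names here are hypothetical, not from the actual project). The point is how repetitive these one-purpose accessors are, which is exactly what makes them easy for Copilot to complete after seeing a couple of examples:

```python
import sqlite3


class UserRepository:
    """Hypothetical repository with small single-purpose query methods."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn

    def get_username_from_user_guid(self, guid: str):
        # Each accessor is one tiny, near-identical SQL query.
        row = self.conn.execute(
            "SELECT username FROM users WHERE guid = ?", (guid,)
        ).fetchone()
        return row[0] if row else None

    def get_email_address_from_user_guid(self, guid: str):
        # The kind of near-duplicate method Copilot finishes off
        # after you type just the start of the name.
        row = self.conn.execute(
            "SELECT email FROM users WHERE guid = ?", (guid,)
        ).fetchone()
        return row[0] if row else None
```

After two or three of these, the structure (query, parameter, null check, return) is so predictable that a completion model has essentially no degrees of freedom left to get wrong.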
To me, that's where it shines!
Once you figure out where it works best and its limits, it's brilliant imo.
Yes, it's good at boilerplate. But I've spent a long time getting good at vim macros, and I'm also very good at generating boilerplate with a tiny number of keystrokes, quickly and without leaving my editor
...
Or I could type a paragraph to an LLM, copy-paste, and then edit the parts it gets wrong? And I have to pay per token to do that?
No...
You have to expend mental effort to think about your solutions anyway; I guess it's pick your poison, really.
Considering that most questions may not actually be "new questions" and already have answers, sometimes, if it's important enough, it's worth putting in the effort to understand the problem and solve it yourself. The over-dependence that people are developing on LLMs is a little concerning.
If you use chat, it's work, and then you have to debug and understand something you didn't write yourself.
Using inline suggestions is the closest I have come to plugging my brain directly into the computer. I type 5 characters and in 80% of cases the suggestion is character by character exactly what I would have written. It speeds me up enormously.
> In particular, I asked ChatGPT to write a function by knowing precisely how I would have implemented it. This is crucial since without knowing the expected result and what every line does, I might end up with a wrong implementation.
In my eyes, it makes the whole idea of AI coding moot. If I need to explain every step in detail, and it does not "understand" what it's doing (I can virtually see the statistical trial-and-error behind its actions), then what's the point? I might as well write it all myself and be a bit more sure the code ends up how I like it.
link: https://www.linkedin.com/feed/update/urn:li:activity:7289241...
But this is the kind of thing an LLM excels at. It gives you 200 lines of implementation right away, and you have a good understanding of both what it should look like and how it should work.
Slow and error-prone to type, but quick and easy to verify once done; that's the key use case for me.
Seriously, I've been using LLMs for coding for a while and can say the early experience was disappointing, but they're getting better fast. The latest o1 looks a lot better than 4o. It's reasonable to expect that, with proper human supervision and interfaces, they will be able to handle big files and projects in a year or two. Interesting times...
I can confidently say the way I code today is completely different from the way I was coding in late 2022 and it changed a couple times in between then and now, too.
Hmm... I can try... OAI has something called 'projects', but local handling with API calls is probably the right way of doing it. Easier to switch providers, run and debug in place. With current prices it should be like < $10/month.
(e.g. GitHub Copilot for PR reviews, etc.)
You will of course need an insanely beefy and expensive machine to run any useful models at reasonable speeds, which would likely cover API usage costs for many, many years (an entire lifetime, likely).
I use Zed’s AI assistant with Sonnet, and will generally give it 10-20k tokens of sample code from elsewhere in the codebase, shared libraries, database schema, etc., and more or less have a very specific expectation of exactly the code I want to get. More often than not, it will succeed and I’ll get it faster than typing it myself.
However, it’s also pretty good at poking holes in your design, coming up with edge cases, etc. Sure, most of its observations will likely be moot somehow, but if it lists 10 points, then even if only 2 are valid and ones I didn’t think of, it’s already valuable to me.
I’ve also used Cline a bit; it’s nice too, though most of the time a single run of Claude works just fine, and I like Zed’s AI Assistant UX (I actually don’t use Zed for anything other than that).
Like, all told? The whole bit where you need to find code, paste it in, find more code, paste it in, prompt a good question and most likely iterate on it, for an answer you say you had already expected, is faster than typing it out?
I don’t understand.
My experience is that it can still be tricky to get high quality results when letting the AI actually edit the code for you. A few of my attempts went rather poorly. I’m hoping tweaks to how I use the tools improve this. Or I’ll just wait until better versions are released :)
I use Claude in the web interface quite often though. It’s very helpful for certain queries. And I can usually abort quickly when it gets lost or starts hallucinating.
Edit: saw you said "paste it in". Same thing there. You either just include the whole file or select the code and press "Include". You can also let the editor handle the inclusion itself based on your prompt. It will then try to find the relevant files to include.
(asking ChatGPT after getting a very stitched-together-looking, example-ish result):
Me: You simply read various examples from the D3D12 documentation and mixed them together without really understanding them? Admit it! :D
ChatGPT: Haha, I admit it, that was a bit of a ‘best of the DirectX 12 documentation’! But hey, I tried to build you a solid base that covers both window handling and the basics of DirectX 12.