> AIs are sometimes good for code, sometimes not so good. I put some money in but don't know what code will come out, and it's so easy to put more money in. My friend told me to use Windsurf instead of Cursor, and it's easy to switch IDEs, they have no moat. Where's the moat?
AFAICT, that's the whole article.
This has often been compared to a slot machine: it's not deterministic what code will come out, or which files will be changed and how.
For example, the graph mentioned in the article shows an "initial prompt with iterative tweaks", followed by repeated rounds of "starting from scratch". I don't understand why you'd think "this is an ineffective way of doing things" and then keep doing it.
Describing LLMs as "slot machines" seems like the author has no curiosity about the shape of what LLMs can/can't do.
Answer is used as-is and needs to be factually correct: bad.
On the other hand, I think MS is pretty much cloning the good stuff from Cursor, like agent mode: https://www.youtube.com/watch?v=dutyOc_cAEU
Leaves me thinking VS Code might be the best in the long run, but for now they're all kinda similar?
Isn't the situation similar to Brave et al., built on top of Chromium but still supporting Chrome extensions?
Maybe Cursor/Windsurf could've just been plugins? Only Zed.ai seems really different among the popular IDEs.
In agent mode (you need to enable it manually in settings), has anyone tested forcing Copilot to run unit tests after code changes, and to fix the code if the tests break?
My issue was my instructions.md file telling it to think too much before writing files and running tests, so it was stuck in a rabbit hole of eternal thinking.
Now I can tell it to create crud pages and it will generate and run tests for those pages as well.
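For anyone trying the same thing, here's a rough sketch of what such an instructions file could look like. The file name and exact wording are illustrative, not the file I actually use; Copilot in VS Code picks up repo-level instructions from `.github/copilot-instructions.md`:

```markdown
<!-- .github/copilot-instructions.md -->
# Workflow rules for the agent

- Keep planning short: write at most a brief plan before editing files.
- After every code change, run the unit tests immediately.
- If any test fails, fix the code and re-run the tests until they pass.
- When asked to generate a feature (e.g. CRUD pages), also generate unit
  tests for it and run them before finishing.
```

The key change versus my old file was cutting the "think carefully before acting" wording, which was what caused the endless-thinking loop.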
I'm looking for solutions for sandboxing it. Docker looks like one, but I like to keep things simple.