I've been using Cursor extensively these past few months, for anything ranging from scaffolding to complex UIs. The trick, I've found, is to treat the AI like I would a junior engineer: giving it concrete, detailed tasks to accomplish and breaking the problem down myself into manageable chunks. Here are two examples of little word games I've made; each took, all in all, a couple of days to ideate, design, and build.
https://7x7.game You're given a grid and need to make as many words as possible, using only the letters in the bottom row. There's complex state management, undo, persistent stats, light/dark modes, and animations. About 80-90% of the code was generated and then manually tweaked/refactored.
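Purely as an illustration of the kind of state management involved (not the author's actual code, and in Python rather than whatever the game runs on), undo on top of state snapshots can be as simple as a stack:

```python
# Minimal sketch of snapshot-based undo: every move pushes a copy of
# the game state, and undo pops back to the previous snapshot.
import copy

class UndoableGame:
    def __init__(self, state):
        self.state = state
        self.history = []

    def apply(self, word):
        # snapshot before mutating, so undo can restore it later
        self.history.append(copy.deepcopy(self.state))
        self.state["words"].append(word)

    def undo(self):
        if self.history:
            self.state = self.history.pop()

game = UndoableGame({"words": []})
game.apply("CRANE")
game.apply("TRACE")
game.undo()  # back to the state with only "CRANE" played
```

The persistent-stats part would then just be serializing that state to local storage between sessions.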
https://vwls.game Given 4 consonants, you have to generate as many words as possible. It's heavily inspired by Spelling Bee, but with a slightly different game mechanic. One of the challenges was that not all "valid" words are fun; the dictionary is full of obscure/technical/obsolete words, so I used Claude's batch API to filter it down to words that are commonly known. I then used Cursor to generate the code for the UI, with some manual refactoring.
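A rough sketch of that filtering idea: chunk the word list and build one batch request per chunk, asking the model to keep only commonly known words. The prompt wording, chunk size, and model name here are assumptions, not the author's actual setup; the request shape loosely follows Anthropic's Message Batches API.

```python
# Build batch requests that ask the model to filter each chunk of the
# dictionary down to commonly known words.

def build_filter_requests(words, chunk_size=200, model="claude-3-5-sonnet-latest"):
    requests = []
    for i in range(0, len(words), chunk_size):
        chunk = words[i:i + chunk_size]
        prompt = (
            "From the word list below, return only the words an average "
            "player would recognize, one per line. Drop obscure, "
            "technical, and obsolete words.\n\n" + "\n".join(chunk)
        )
        requests.append({
            "custom_id": f"chunk-{i // chunk_size}",
            "params": {
                "model": model,
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": prompt}],
            },
        })
    return requests

# The requests would then be submitted in one shot, roughly:
#   client.messages.batches.create(requests=build_filter_requests(words))
# and the per-chunk results polled and merged afterwards.
```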
In both cases, having the AI generate the code enabled me to focus on designing the games, both visually and from an interaction perspective. I also chose to manually code some parts myself, because they were fun.
At the end of the day, tools are tools, you can use them however you like, you just need to figure out how they fit in your workflow.
Use Composer notebooks to keep a growing markdown document of context you want future versions to remember.
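Purely as a hypothetical illustration (none of these details come from the comment itself), such a growing context document might look like:

```markdown
# Project context (kept in a Composer notebook, updated as decisions are made)

## Stack
- e.g. TypeScript + React, Vite for builds

## Conventions
- Prefer small pure functions; no default exports

## Decisions the AI should remember
- The word list lives in `assets/words.txt`; never fetch it at runtime
```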
My Cursor today is much better than my Cursor on day 1.
…so not programming?
AI is awesome for solving issues, answering questions about code, and suggesting possible solutions. But maybe I'm just fast at writing code that actually solves the problem, so I don't need an AI to code for me.
Cody's autocomplete used to work really well for me. Then they switched to DeepSeek. Now I regularly get suggestions that are irrelevant, incomplete, and contain syntax errors.
I'm not sure what it's like these days but I had a similar experience with Copilot a while back.
I wonder if good autocomplete is just too expensive.
Other than that, having chat with o1 and sonnet inside the editor is pretty good ngl
I personally find writing tests to be soul-crushing, boring work. I never really learned it properly, and when I have a well-documented function, CGPT typically does a decent job making a rough draft. I often have to work on the test function and fix some things, but the final product is way better than the PoS I would have put together: my guess is it has saved me hundreds of hours.

I have developed a decent understanding of fixtures, mocking, sharing fixtures across modules, etc., all with the help of ChatGPT. It "understands" my project and how it is organized, and makes suggestions based on that understanding. Yes, it sometimes gets stuck in local minima and I have to kick it out, which can be frustrating. But even that is a learning process, as I often go to SO or other people's code bases to find good examples and feed them to ChatGPT to get it unstuck.
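For readers who haven't met the mocking patterns mentioned above, here is a minimal standard-library sketch; `get_status` and the fake client are hypothetical, and in pytest the fake client would typically become a fixture in `conftest.py` so every test module can share it:

```python
# Sketch of test mocking with the standard library's unittest.mock.
from unittest.mock import Mock

def get_status(client):
    # hypothetical function under test: calls an API client and
    # returns the status field of the response
    return client.fetch()["status"]

def make_fake_client():
    # stand-in for a shared fixture: a mock client with a canned response
    client = Mock()
    client.fetch.return_value = {"status": "ok"}
    return client

def test_get_status():
    client = make_fake_client()
    assert get_status(client) == "ok"
    client.fetch.assert_called_once()  # exactly one API call was made

test_get_status()
```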
It's like the ultimate rubber duck paired programming partner. I tell it what I'm working on, and that's intrinsically helpful. But the rubber duck has really good feedback, because it has read the entire internet.
It's made writing tests for my code fun, for the first time ever.
The people I know personally who refuse to use CGPT are typically very good software developers, somewhat arrogant, with a chip on their shoulder, and honestly I think in 20 years we'll look back at them like the people who thought the internet was a passing phase in the mid-1990s. I also think many of them don't understand how LLMs work, or how powerful they can be when prompted correctly.
I find it interesting that when people describe to me how they use LLMs to write code it's either short throwaway scripts or to write the kind of code that would make me retch (e.g. tests stuffed full of horrible mocks, spaghetti boilerplate).
In this case it is the opposite, the best ML/software engineers today think this is a passing phase. It's the general population and business people who are claiming it to be revolutionary.
Only time will tell though
The pushback I see is from people who were raised to write everything from scratch and who don't trust the output of LLMs because of "hallucinations" or other crappy outputs. The problem is, the people making these claims are really out of touch with prompt engineering and with how students are currently learning to code with AI in the loop. For basic coding and testing with common libraries, LLMs are really, really good at explaining things and writing entry-level code and tests -- this is not arguable: the people fighting this are graybeards who haven't coded at a basic-to-intermediate level in a long time.
A good software developer, with a nose for code smells, won't just accept any old code an LLM produces; you have to use it intelligently and push back on bizarre constructions. Hence, for me, who hates writing tests, it is an amazing tool. If I had an intern or an undergrad who loved grinding out tests I'd use them, but that's basically my LLM at this point (and for the "but ackshually" guys: yes, obviously you can't use them mindlessly; we are writing code, not drawing doodles).
There are thousands of weather apps in the App Store, but none display rain data exactly the way I’d like to see it. That’s why I’ve long considered writing my own home screen widget to show it exactly as I want.
I hadn’t developed iPhone apps in a few years, so I had no experience with SwiftUI, the Swift Graph framework, or creating widgets. Just two years ago, building an app with a widget from scratch would have taken me a week — to read tutorials, navigate the necessary documentation, get started and solve my beginner bugs. Because of that time investment, I always hesitated to even begin.
Now, I’ve created exactly what I wanted in a single afternoon after work, with the help of AI. To be honest, GitHub Copilot isn’t very helpful for this, though it does speed up repetitive typing. However, using ChatGPT to scaffold the graph code—with me tweaking the parameters—made the process much faster. Since they added search functionality, there’s minimal "hallucination" of APIs, allowing for quick iterations and bringing back that “joy of programming” feeling.
When you program for a hobby, you oftentimes seek to enjoy the route as much or more than reaching the destination. Copilot would be a distraction and an annoyance in this case - unless you're genuinely stuck and then you can use Copilot as a mentor.
It all depends on your context and what you're trying to do.
I've had an absolutely magical experience with copilot though. I honestly find it a bit strange when others say it has just been bad for them
I imagine scenarios where AI could be given complete authority to decide who is hired/fired, who gets medical care, who gets food, who gets utilities (water/electricity/natural gas) to their homes, who gets disaster relief, etc. Quite frightening when you think about it. If AI decided to cancel you (and it had this level of authority) your very existence would be in danger.
My hobby is nature photography, and in the last 2 years I have stopped spending time browsing photo pages, as most of them are garbage generated by AI.
One argument people hyping generative AI make is that humans make mistakes the same way AI does, so at worst you get something similar to human error. True, but the volumes AI generates are orders of magnitude bigger, and only a limited number of humans can validate and filter them. If nothing changes, this volume of garbage will surely overflow us, and there will be no way to differentiate bad from good, fake from authentic content.
We'll lift millions in the "global south" out of poverty by providing the tools to criminals and foreign adversaries that drive demand for cheaply-staffed high-rep social media account farms.
What a time to be alive. What frontiers we are exploring.
Why practice writing if generative ML can create a poem or short story for you?
In that regard you're best off just sitting down in front of a screen and consuming content generated by ML.
And in the example from the original blog post, the author had the generative ML do the fun part of solving the problem, while all they did was the drudge work of cleaning it up and submitting it. Very productive from the company's perspective, but it reminds me a lot of low-thought factory processes.
Do you really think the authoritarian elites will let the unwashed masses with no income do whatever the hell they want? Can you really say that after COVID lockdowns demonstrated their true colors?
None of the other capitalism-alternatives have historically afforded the kind of luxury you're suggesting either. Quite the opposite, in fact.
When someone figured out that space could bring profit too, we got some developments in space travel as well.
The way I've found the most success using LLMs is as a partner for ping-ponging ideas: coming up with code design, algorithms, and data structures that fit a particular scenario. Then I ignore its code and write my own to fit the project. The trick is to use the randomness, combined with the vast array of information it holds, to your advantage, like a supercharged Google.
Regarding my joy of programming, for me it's not even close. I get my joy from the project as a whole, not from snippets of code sprinkled around (sometimes I wish it could handle a whole project; I have hundreds of projects I would like to tackle, but they're not worth my time). The only thing I worry about is that future versions won't be accessible to the public, or will cost exorbitant amounts.
edit: for the way I'm using LLMs, I found the approach taken by the Zed editor to be the best; I really recommend its buffer: easy to copy-paste, modify, and search (it would be nice to also have divergence from a chat, hopefully in the future)
the beginning:
the ai is a junior engineer you, as a senior engineer, can coach

the end:
the ai is a senior engineer with a half-finished problem you can polish as a junior engineer
My experience is that like so much else there's an expiry date on the joyful coding.
I gave AI another chance, but it's too incompetent; it's more of a creative intern that turns in sloppy, rushed reports than a competent replacement for painstakingly reading documentation and googling.