I could probably go much lower and find a model that is dirt cheap but takes a while; right now the cutting edge (for my own work) is Claude 4 (non-max / non-thinking). To me it feels like Cursor must be hemorrhaging money. What works for me is that I can justify those costs working on my own service, which has some customers, and each added feature gives me an almost immediate return on investment. But it feels like the current rates Cursor charges are not rooted in reality.
Quickly checking Cursor for the past 4 day period:
Requests: 1049
Lines of Agent Edits: 301k
Tabs accepted: 84
Personally, I have very few complaints or issues with Cursor, only a growing wish list of more features and functionality. Like how cool would it be if asynchronous requests worked? Rather than just waiting for a single request to complete on 10 files, why can't it work on those 10 files in parallel at the same time? Right now so much time is spent waiting for a request to complete (while I work on another part of the app in a different workspace with Cursor).
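The parallel idea could look roughly like this in principle (a hypothetical sketch; `edit_file` is a stand-in for a real agent request, not anything Cursor actually exposes):

```python
import asyncio

# Hypothetical: dispatch one agent request per file and await them
# together, instead of serializing ten slow model round-trips.
async def edit_file(path: str) -> str:
    await asyncio.sleep(0.1)  # stand-in for a slow model round-trip
    return f"edited {path}"

async def edit_all(paths: list[str]) -> list[str]:
    # All requests run concurrently, so wall time is roughly one
    # round-trip instead of len(paths) round-trips.
    return await asyncio.gather(*(edit_file(p) for p in paths))

results = asyncio.run(edit_all([f"src/file_{i}.py" for i in range(10)]))
```

With real agent calls the bottleneck would shift to provider rate limits and conflicting edits, which is presumably part of why this isn't trivial to ship.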
They don't make any money. They are burning VC money. Anthropic and OpenAI are probably also not making money, but Cursor is making "more no money" than the others.
It's like a horse race.
But yeah enjoy the subsidies. It's like the cheap Ubers of yesteryear.
Switching costs are zero and software folks are keen to try new things.
You can open up to three parallel chat tabs by pressing Cmd+T
Each chat tab is a full Agent by itself!
What does this measurement mean?
1049 / (4 * 8) ≈ 33 requests per hour, or roughly one every two minutes on average. Doesn't look like much waiting to me.
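Checking that arithmetic (assuming four days at roughly 8 working hours each):

```python
requests = 1049
working_hours = 4 * 8                 # four days, ~8 working hours each

per_hour = requests / working_hours   # ≈ 32.8 requests per hour
seconds_between = 3600 / per_hour     # ≈ 110 s between requests
```

So on average a request lands about every two minutes, which supports the "not much waiting" reading.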
The problem with generative AI workloads: the costs rise linearly with the number of requests, because every query has to be computed.
Both are genuine questions.
Another issue I am starting to see is the lack of shared MCP servers. If I have VSCode, Cursor, and Claude open, each one is running its own instance of every MCP server. You can imagine that with a dozen or so MCPs, the memory footprint becomes quite large for no reason.
I don’t think the future of agentic software development is in an IDE. Claude Code gives me power to orchestrate - the UX has nothing to do with terminal; it just turns out an agent that lives on the OS and in the filesystem is a powerful thing.
Anthropic can and will evolve Claude Code at a pace cursor cannot evolve IDE abstractions. And then yea - they are designing the perfect wrapper because they are also designing the model.
Long bet is Claude Code becomes more of an OS.
Cursor is essentially only the wrapper for running agents. I still do my heavy lifting in Jetbrains products.
It actually works out well because I can let Cursor iterate on a task while I review/tweak code.
Knowing what tools are better for what really helps.
https://github.com/tuananh/hyper-mcp
It's an MCP server with a WASM plugin system, packaged, signed, and published via an OCI registry.
The one issue I've run into is that the VSCode version Cursor uses is several months old, so we're stuck using older extensions until they update.
This is good user feedback. If Cursor is "Claude + VSCode", why do you need the other 2 open?
I wrote an MCP server with a plugin system where you only need to run one instance and add plugins via a config file.
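A single shared instance with its plugins declared in configuration might look roughly like this (the field names are purely illustrative, not the actual schema of hyper-mcp or any specific server):

```json
{
  "plugins": [
    { "name": "fetch",  "source": "oci://registry.example.com/plugins/fetch:v1" },
    { "name": "memory", "source": "oci://registry.example.com/plugins/memory:v1" }
  ]
}
```

Each client (VSCode, Cursor, Claude) would then connect to the same running process instead of spawning its own copy of every server.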
We recently launched Zapier MCP, we host the servers for you: https://zapier.com/mcp
STDIO MCP is really just a quick hack.
I’m particularly interested in the release of BugBot. The docs mention it looks at diffs, but I hope it’s also scanning through the repository and utilizing full context. Asking Copilot to do a review does the same thing, but because it’s only looking at diffs, the feedback it provides is pretty useless: mainly just things that a linter could catch.
And every time I find it having diverged further from VSCode compatibility.
This wouldn’t be so bad if it were an intentional design choice, but it seems more like Microsoft is starting to push them out. For example, the MS Dev Containers plugin is still recommended by some leftover internal setting, but if you install it you get pushed into a flow that auto-uninstalls it and installs Remote Containers by Anysphere (which works differently and lacks support for some features). And I end up rebuilding my Dev Container once more… I also noticed that recent extensions, such as the Postgres one from MS, aren’t available either.
It feels almost as if VSCode is not adding new features and is in maintenance mode for now. I have no idea if that's actually true, but if this continues, a fork will be easily maintainable.
where is the splashy overproduced video? where are the promises of agi? where is the "we are just getting started" / "we can't wait to see what you'll build"? how do i know what to think if you aren't going to tell me what to think?
edit: oh haha https://x.com/cursor_ai/status/1930358111677886677
When reviewing the changes made from agent mode, I don’t know why the model made the change or whether the model even made the change versus a tool call making the change. It’s a pain to go fish out the reason from a long response the model gives in the chat.
Example: I recently asked a model to set up shadcn for a project, but while trying to debug why things looked pretty broken, I had to sift through a bunch of changes that looked like nasty hallucinations and separate those from the actual changes made by shadcn's CLI, which the model had called. Ended up having to just do things the old-fashioned way to set things up: reading the fine manual and using my brain (I almost forgot I had one)
It would be nice if above every line of code, there’s a clear indication of whether it came from a model and why the model made the change. Like a code comment, but without littering the code with actual comments
Hand written code needs to be distinguishable and considered at a higher priority for future code generation context
When Ampcode took it all away from me, I found I enjoyed the actual AI-assisted coding much more than configuring. Of course, largely because it just worked. Granted, I had enough experience with other AI tools to manage my expectations.
In my experience, next edit is a significant net positive.
It fixes my typos and predicts next things I want to do in other lines of the same file.
For example, if I fix a variable's scope in a loop, it automatically scans for similar mistakes nearby and suggests fixes. Editing multiple array values is also intuitive. It will also learn and suggest formatting preferences and other things such as API changes.
Sure, sometimes it suggests things I don't want but on average it is productive to me.
It's so painful: the model never knows the directory in which it is supposed to be and goes on a wild goose chase of searching in the wrong repo. I have to keep guiding it to the right repo. Has anyone here had success with such a setup?
Keep it short. It's enough for it to realize it needs to navigate directories.
Developers use Vim, JetBrains, Emacs, VSCode, and many other tools—what makes you think they’ll switch to your fork?
I think this is an artifact of Cursor being a closed-source fork of an open-source project, with a plugin architecture that's heavily reliant on the IDE at least being source-available. And, frankly, taking an open-source project like VSCode and commercializing it without even making it source-available is a dishonorable thing to do, and I'm rooting against them.
Overall, I am having a hard time with code autocompletion in the IDE. I am using Claude desktop to search for information and bounce ideas off, but having it directly in the IDE – I find it too distracting.
Also there is this whole ordeal with VSCode Marketplace no longer available in Cursor.
I'm not saying AI in IDE is bad, it's just I personally can't get into it to actually feel more productive.
Also, Trae being $10 for more requests makes Cursor far less appealing to me.
Rust is easier for me than shell scripting so I started writing what I needed and remembered Zed added agent mode. I decided to give it a shot. I had it use Claude 4 with my api tokens.
It wrote the entire program, tested it, debugged it. It made some bad assumptions and I just steered it towards what I needed. By the end of about an hour, I had my complete fix plus an entire ZFS management layer in Rust.
It did cost $11, but that is a drop in the bucket for time saved. I was impressed.
Just sharing this because I got real and measured value recently that is way beyond the widely shared experience.
As an aside: $11 for an hour of Claude 4 seems expensive? I’ve been using Claude 4 (through a Zed Pro subscription, not with my own keys) and I haven’t hit the subscription limits yet. Were you using burn mode, or?
"Cursor—the fastest-growing AI code editor in the world, reaching $300 million in annual recurring revenue just two years after its launch"

If you sell $1.00 USD for $0.90 you can get nearly unlimited revenue (until you run out of cash).
* BYO model or not
* CLI, UI, VSC-plugin or web
* async/sync
* MCP support
* context size
* indexed or live grep-style search
There's probably like 10 more.
Does anyone know if it's GitHub-only or can it be used as a CLI (i.e., Aider replacement)?
Recently, there was a post with detailed evidence suggesting Cursor was intentionally throttling requests [1], including reverse engineering and reproducible behaviors. The team initially responded with a "happy to follow up", but later removed their replies that got downvoted, and banned the OP from posting further updates.
Their response sounded AI-generated too, which wasn't very surprising based on the way they handle customer support [2]. I wish they were more open to criticism instead of only claiming to be transparent.
[1] https://www.reddit.com/r/cursor/comments/1kqj7n3/cursor_inte...
You can commit checkpoints prior to each major prompt and use any IDE's built-in visual diff against the last commit. Then just rebase when the task is done.
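That workflow can be sketched as a script (here scripted with Python's subprocess against a throwaway repo; the commit message and file names are just illustrative):

```python
import pathlib
import subprocess
import tempfile

def git(cwd, *args):
    """Run a git command in `cwd` and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = pathlib.Path(tempfile.mkdtemp())
git(repo, "init", "-q")
git(repo, "config", "user.email", "demo@example.com")
git(repo, "config", "user.name", "Demo")

# 1. Commit a checkpoint before handing the task to the agent.
(repo / "app.py").write_text("print('v1')\n")
git(repo, "add", "-A")
git(repo, "commit", "-q", "-m", "checkpoint: before prompt")

# 2. ...the agent edits files...
(repo / "app.py").write_text("print('v2')\n")

# 3. Review everything the agent did against the checkpoint.
diff = git(repo, "diff", "HEAD")
```

Once the task is done, the checkpoint commits can be squashed away with an interactive rebase, so agent-era history doesn't pollute the log.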
I'm super happy with it. I'm not sure how it compares to other coding agents, though.
Also had a few misses, but in general it is OK. I still prefer AI Assistant, because I can then steer the result in a certain direction. It also feels faster; it probably isn't, because of the manual steps involved.
It has shown promise, enough to quell my FOMO about other IDEs, since I am extremely happy with the JetBrains suite otherwise.
I believe it’ll get much better as LLMs start editing code by invoking refactoring tools (rename, change signature, etc.) rather than rewriting the code line by line, since this will let them perform large-scale changes reliably, similar to how software engineers do them now using IDE tools.
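A refactoring-tool call exposed to a model might look something like this (a purely hypothetical schema; the tool name and fields are illustrative, not any shipping product's API):

```json
{
  "tool": "rename_symbol",
  "arguments": {
    "file": "src/user_service.py",
    "line": 42,
    "old_name": "get_usr",
    "new_name": "get_user"
  }
}
```

The IDE's refactoring engine would then update every reference in one atomic operation, instead of the model rewriting each call site and potentially missing some.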
Evidently not
It's also missing quite a few features still (like checkpoints, stopping the agent mid-task to redirect, etc). But the core feature set is mostly there.
Do it. I've started editing with Zed and just keeping Cursor/IntelliJ open on the side (Cursor because of the free student plan, IntelliJ for school assignments).
I feel spoiled by the performance, especially on ProMotion displays. I've started noticing some dropped frames in Cursor, and measured an average of 45-60 fps in IntelliJ (which is somewhat expected for such a huge IDE). I basically exclusively write in Zed, and do everything else in the respective apps.
That said, I do continue to think that agents are in this weird zone where it's more natural to want to interact through a ticketing layer, but you kind of want the editor layer for the final 5%.
Because we're developers with things to build and we don't have time to play with every AI tool backed by the same LLM.
Like it or not, we're hitting the slope of enlightenment and some of us are ready to be done with the churn for a while.
It's just like VSC, which I was using, but it has these magical abilities. I was using it within a minute of downloading it. Unlike Cline, I guess, whatever that is.
You can use the full-context if you prefer that cost/speed tradeoff! Just have to turn on Max Mode.
Cline is great for many users, but a bit of a different product. Lots of Cursor's value comes from custom models that run in the background (e.g. Tab, models that gather context, etc.).
There's still a ton of low hanging fruit that other Copilot-style autocomplete products don't seem to be picking up, like using clipboard contents, identifying the next place in the file to jump to, etc.
I primarily save time coding with AI with autocomplete, followed by chat, with agentic flows a very distant 3rd, so Cursor is a legitimately better product for me.
I’m also curious how this compares to OpenAI’s Codex. In my experience, running agents locally has worked better for large or complex codebases, especially since setting up the environment correctly can be tricky in those setups.
"Cursor works with any programming language. We’ve explicitly worked to improve the performance of our custom models — Tab included — on important but less popular languages like Rust, C++, and CUDA."
Hundreds of languages supported: https://code.visualstudio.com/docs/languages/overview
video demo here, https://x.com/ob12er/status/1930439669130637482?s=46&t=2jNrj...
Cursor earned a lot of respect from our dev team, if today's Slack messages are anything to go by.
Will be adding the "Add to Cursor" button to Glama later today.
If anyone from Cursor is reading this, we are rolling out MCP server usage analytics where we aggregate (anonymous) usage data across several providers. Would be amazing to include Cursor (reach me at frank@glama.ai). The data will be used to help the community discover the most used (and therefore useful) clients and servers.
C:\projects\my_project>q^D^C
'q' is not recognized as an internal or external command, operable program or batch file.
1.0 my ass.
* Are you using Cursor as your main editor?
* For more senior engineers -- is it really that much of a productivity boost?
* Is the primary use of an AI editor like Cursor to handle more tedious tasks? Ex: generate an array with all the months, write some unit tests for me for some simple component, write some boilerplate code for me, etc.
Maybe I'm being overcautious, but one of the worst things (for me) that came from the AI rush of these past years is this feeling that everything is full of bots. I know that people have preferences, but I feel that I cannot trust anymore that a specific review was really made by a human. I know that this is not something new, but LLMs take it to the next level for me.
'Connection failed. If the problem persists, please check your internet connection or VPN'
I've contacted support and they have been no help. You can see tons of people having this issue in user forums. Meanwhile, bypassing the giant monstrosity that is VSCode (and then Cursor as a fork on top of it) gives me no such issues.
So I wouldn't be so dismissive that anyone frustrated with Cursor is a bot.
Not GP, but my suspicions are actually of the other end of the spectrum - i.e., it's the glowing reviews of AI things that make my bot-sense tingle.
Though I usually settle on the idea that they (the reviewers) are using LLMs to write/refine their reviews.
You should just set aside some time to try out different tools and see if you agree there's an improvement.
For trying models, OpenRouter is a big time saver.
I hope that I am wrong, but, if I am not, then these companies are doing real and substantial damage to the internet. The loss of trust will be very hard to undo.
- Burning tokens with constant incorrect command-line calls to read lines (which it eventually gets right but seemingly needs to self-correct 3+ times for most read calls)
- Writing the string "EOF" to the end of the file it's appending to with cat
- Writing "\!=" instead of "!="
- Charged me $7 to write like 23 lines (admittedly my fault since I forgot I kept "/model opus" on)
Minus the bizarre invalid characters I have to erase, the code in the final output was always correct, but definitely not impressive since I've never seen Cursor do things like that.
Otherwise, the agent behavior basically seems the same as Cursor's agent mode, to me.
I know the $7 for a single function thing would be resolved if I buy the $100/month flat fee plan, but I'm really not sure if I want to.
For more than a year, Anthropic has engaged in an extensive guerrilla marketing effort on Reddit and similar developer-oriented platforms, aiming to persuade users that Claude significantly outperforms competitors in programming tasks, even though nearly all benchmarks indicate otherwise.
I have been using the Claude.ai interface in the past and have switched to Aider with Anthropic API. I really liked Claude.ai but using Aider is a much better dev experience. Is Claude Code even better?
I still prefer Cursor for some things - namely UI updates or quick fixes and explanations. For everything else Claude Code is superior.
The best thing about it: it's open source, costs nothing, and is much more flexible than any other tool. You can use any model you want, or combine models from different vendors for different tasks.
Currently, I use it with deepseek-r1-0528 for /architect and deepseek-v3-0325 for /code mode. It's better than Claude Code, and costs only a fraction of it.
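That split can be expressed in aider's config file (aider does support an architect mode with a separate editor model, though the exact keys and model identifiers below should be verified against aider's docs and your provider):

```yaml
# illustrative .aider.conf.yml; check aider's documentation for exact keys
architect: true
model: deepseek/deepseek-reasoner     # reasoning model plans the change
editor-model: deepseek/deepseek-chat  # cheaper model applies the edits
```

The idea is to spend the expensive reasoning tokens only on planning, then let a cheaper model do the mechanical edits.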
Once something, like AI in this case, becomes a commodity, open source beats the competition.
At least Cursor is affordable for any developer. Most of the time, even though it's totally normal, companies act like they're doing you a favor when they pay for your IDE, so most people aren't going to ask for an AI subscription anytime soon.
I mean, it will probably come but not today.
am i missing that much?
Traditional code editing -> autocomplete -> file editing -> agent mode
This is basically a gradient of AI output sizes. Initially, with the ability to generate small snippets (autocomplete), and moving up to larger and larger edits across the codebase.
Cursor represents the initial step of AI-assisted traditional coding... but agent mode is reliable now, and can be directed fairly consistently to produce decent output, even in monorepos (IME). Once the agent has produced the output, I've found I prefer minimal to no AI for refining and cleaning it up.
The development techniques are different. In agent mode, there's far more focus on structuring the project, context, and prompts, which doesn't happen as much in the AI-autocomplete development flow. Once this process shift happened in my workflow, the autocomplete became virtually unused.
So I think this shift toward larger outputs favors agent-focused tools like CC, Aider, Cline, and RooCode (my personal favorite) over more traditional interfaces with AI assistance.
Now I've changed my technical planning phase to write in a prompt-friendly way, so I can get AI to bootstrap, structure, boilerplate and usually also do the database setup and service layer, so I can jump right into actually writing the granular logic.
It doesn't save me planning or logic overhead, but it does give me far more momentum at the start of a project, which is a massive win.
The agent stuff is largely useless. The tab prediction goes nuts every few seconds, completely disrupting flow.
This is my main gripe with it, too. It's still been semi-useful at least for some analysis and examination of our code-base, but editing and autocomplete I've not found super useful yet.
What about GitLab instead of GitHub? Is there an equivalent to the Cursor 1.0 product?
Git host doesn't really make a difference.