I worked with Antonio on prototyping the extensions system[0]. In other words, Antonio got to stress test the pair programming collaboration tech while I ran around in a little corner of the zed codebase and asked a billion questions. While working on zed, Antonio taught me how to talk about code and make changes purposefully. I learned that the best solution is the one that shows the reader how it was derived. It was a great summer, as far as summers go!
I'm glad the editor is open source and that people are willing to pay for well-engineered AI integrations; I think originally, before AI had taken off, the business model for zed was something along the lines of a per-seat model for teams that used collaborative features. I still use zed daily and I hope the team can keep working on it for a long time.
[0]: Extensions were originally written in Lua, which didn't have the properties we wanted, so we moved to Wasm, which is fast + sandboxed + cross-language. After I left, it looks like Max and Marshall picked up the work and moved from the original serde+bincode ABI to Wasm interface types, which makes me happy: https://zed.dev/blog/zed-decoded-extensions. I have a blog post draft about the early history of Zed and how extensions with direct access to GPUI and CRDTs could turn Zed from a collaborative code editor into a full-blown collaborative application platform. The post needs a lot of work (and I should probably reach out to the team) before I publish it. And I have finals next week. Sigh. Some day!
I've been trying to be active, create issues, help in any way I can, but the focus on AI tells me Zed is no longer an editor for me.
Do you think GPL3 will serve as an impediment to their revenue or future venture fundraising? I assume not, since Cursor and Windsurf were forks of MIT-licensed VS Code. And both of them are entirely dependent on Microsoft's goodwill to continue developing VS Code in the open.
Tangentially, do you think this model of "tool" + "curated model aggregator" + "open source" would be useful for other, non-developer fields? Would an AI art tool with sculpting and drawing benefit from being open source? I've talked with VCs that love open developer tools and they hate on the idea of open creative tools for designers, illustrators, filmmakers, and other creatives. I don't quite get it, because Blender and Krita have millions of users. Comfy is kind of in that space, it's just not very user-friendly.
Good luck on finals!
I learned something from that code, cool stuff!
One question: how do you handle cutting a new breaking change in WIT? Does it take a lot of time to deal with all the boilerplate when you copy things around?
I check back on the GitHub issue every few months and it just has more votes and more supportive comments, but no acknowledgement.
Hopefully someone can rescue us from the sluggish VS Code.
https://github.com/zed-industries/zed/issues/7992
I have a 1440p monitor and I'm seeing this issue.
Example Zed screenshot, using "Ayu Light": https://i.ibb.co/Nr8SjvR/Screenshot-from-2024-07-28-13-11-10...
Same code in VS Code: https://i.ibb.co/YZfPXvZ/Screenshot-from-2024-07-28-13-13-41...
The setting on macOS was called "use font smoothing when available".
> The entire Zed code editor is open source under GPL version 3, and scratch-built in Rust all the way down to handcrafted GPU shaders and OS graphics API calls.
When I saw this, I immediately wondered what strange rendering bugs Zed might run into. That was before reading your comment. In my opinion, this kind of graphics work is not the core functionality of a text editor; it has already been solved in existing libraries. There is no reason to reinvent that wheel... or if there is, please say why.
In a world full of electron based apps, I appreciate anyone who dares to do things differently.
Try it and see. I bet that helps or even fixes this for at least some of you suffering from it.
I have the same issue with macOS in general, and I don't understand how anyone can use it on a normal DPI monitor.
I'm guessing zed implemented their own text rendering without either hinting or subpixel rendering or both.
(Or are you using it in vertical orientation?)
It looks like the relevant work needs to be done upstream.
I don't know the internals of Zed well, but it seems entirely plausible they're doing text rendering from scratch.
Apple has removed support for font rendering methods which make text on non-integer scaled screens look sharper. As a result, if you want to use your screen without blurry text, you have to use 1080p (1x), 4k (2x 1080p), 5k (2x 1440p) or 6k screens (or any other screens where integer scaling looks ok).
To see the difference, try connecting a Windows/Linux machine to your monitor and compare how the text looks with the same screen driven by a macOS device.
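For the curious, the arithmetic behind the blurriness is easy to sketch. This assumes the commonly described model of macOS fractional scaling (render at 2x the chosen logical size, then resample to the panel's native pixels), which is my understanding rather than anything official:

```python
# Sketch of "looks like" scaling: the desktop is rendered at 2x the
# logical resolution, then resampled to the panel. Any non-integer
# resample ratio means no 1:1 pixel mapping, hence soft text.

def backing_scale(logical_w, logical_h, native_w, native_h):
    backing = (logical_w * 2, logical_h * 2)   # rendered at 2x
    ratio = native_w / backing[0]              # resample factor to panel
    return backing, ratio

# "Looks like 1440p" on a 4K (3840x2160) panel:
backing, ratio = backing_scale(2560, 1440, 3840, 2160)
print(backing, ratio)   # (5120, 2880) downsampled by 0.75 -> blurry

# "Looks like 1080p" on the same panel:
backing, ratio = backing_scale(1920, 1080, 3840, 2160)
print(backing, ratio)   # (3840, 2160) at ratio 1.0 -> pixel-perfect
```

Which is why only 1080p/4k/5k/6k panels (integer ratios) look sharp.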
Using pixel fonts at any non-integer multiple of the native resolution will always result in horrible font rendering; I don't care what OS you're on.
I use macOS on all kinds of displays as I move throughout the day; some are 1x, some are 2x, and some are somewhere in between. Using a vector font in Zed looks fine on all of them. It did not look fine when I used a pixel font that I created for myself, but that's how pixel fonts work, not the fault of macOS.
Apparently that's something all editors bothered to do, except Zed.
From the Issue:
> Zed looks great on my MacBook screen, but looks bad when I dock to my 1080p monitor. No other editor has that problem for some reason.
If they're running everything on the GPU then their SDF text rendering needs more work to be resolution independent. I'm assuming they use SDFs, or some variant of that.
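To illustrate what resolution independence means for SDF text, here's a toy 1-D sketch (illustrative only; I have no idea what Zed's shaders actually do). The anti-alias blend width has to track screen pixels, e.g. derived from screen-space derivatives like `fwidth()` in a fragment shader; if the width is effectively fixed in texture space, a glyph shrunk on screen gets a smeared edge:

```python
# Toy 1-D SDF edge anti-aliasing: coverage ramps from 0 to 1 across a
# blend width centered on the glyph edge. Crisp rendering keeps that
# width ~1 screen pixel at any scale.

def smoothstep(e0, e1, x):
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3 - 2 * t)

def coverage(dist, width):
    # dist: signed distance from the glyph edge, in screen pixels
    return smoothstep(-width / 2, width / 2, dist)

# Width derived per-pixel (~1px): edges resolve fully.
print(coverage(2.0, 1.0))    # deep inside the glyph -> 1.0
print(coverage(-2.0, 1.0))   # clearly outside -> 0.0

# Width fixed in texture space can become ~6px on a downscaled glyph,
# smearing the edge across many pixels:
print(round(coverage(2.0, 6.0), 2))   # 0.93 -> visibly soft edge
```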
Really, the screen isn't the issue, given that the OP says other editors look fine on it.
Knuth would be angry reading this :)
The restore checkpoint/redo is too linear for my lizard brain. Am I wrong to want a tree-based agentic IDE? Why has nobody built it?
They fixed that with the new agent panel, which now works more like the other AI sidebars.
I was (mildly) annoyed by that too. The new UI still has rough edges but I like the change.
Vote/read-up here for the feature on Zed: https://github.com/zed-industries/zed/issues/17455
And here on VSCode: https://github.com/microsoft/vscode/issues/20889
I would recommend you check it out if you've been frustrated by the other options out there - I've been very happy with it. I'm fairly sure you can't have git-like dag trees, nor do I think that would be particularly useful for AI based workflow - you'd have to delegate rebasing and merge conflict resolution to the agent itself... lots of potential for disaster there, at least for now.
What I don't like in the last update is that they removed the multi-tabs in the assistant. Previously I could have multiple conversations going and switch easily, but now I can only do one thing at a time :(
Haven't tried the assistant2 much, mostly because I'm so comfy with my current setup
You will not catch me using the words "agentic IDE" to describe what I'm doing because its primary purpose isn't to be used by AI any more than the primary purpose of a car is to drive itself.
But yes, what I am doing is creating an IDE where the primary integration surface for humans, scripts, and AIs is not the 2D text buffer, but the embedded tree structure of the code. Zed almost gets there and it's maddening to me that they don't embrace it fully. I think once I show them what the stakes of the game are they have the engineering talent to catch up.
The main reason it hasn't been done is that we're still all basically writing code on paper. All of the most modern tools that people are using, they're still basically just digitizations of punchcard programming. If you dig down through all the layers of abstractions at the very bottom is line and column, that telltale hint of paper's two-dimensionality. And because line and column get baked into every integration surface, the limitations of IDEs are the limitations of paper. When you frame the task of programming as "write a huge amount of text out on paper" it's no wonder that people turn to LLMs to do it.
With the tree as the integration layer's primary representation, you get to stop worrying about a valid tree blinking into and out of existence constantly, which is conceptually what happens when someone types code syntax left to right. They put in an opening brace, then later a closing brace. In between, a valid tree representation has ceased to exist.
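To make that concrete with Python's own `ast` module (just an illustration of the text-vs-tree contrast, not how any particular editor works): character-level editing passes through states where no tree exists at all, while a tree-level edit goes from one valid program directly to another.

```python
import ast

src = "x = 1"
tree = ast.parse(src)                  # a valid tree exists here

# Text-first: start wrapping the assignment in a function by typing the
# def line. Until the body is indented, no valid tree exists:
try:
    ast.parse("def f():\n" + src)      # body not indented yet
    mid_edit_valid = True
except SyntaxError:                    # IndentationError is a SyntaxError
    mid_edit_valid = False
print(mid_edit_valid)                  # False: the tree blinked out

# Tree-first: splice the existing statements into a complete function
# node in one step; every intermediate state is a valid program.
wrapper = ast.parse("def f():\n    pass")
wrapper.body[0].body = tree.body       # move the assignment inside
print(ast.unparse(wrapper))
```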
How can I follow up on what you're building? Would you be open to having a chat? I've found your GitHub, but let me know if there's a better way to contact you.
> For the nth time, it's about enabling inline suggestions and letting anything, either LSP or Extensions use it, then you don't have to guess what the coolest LLM is, you just have a generic useful interface for LLM's or anything else to use.
An argument I would agree with is that it's unreasonable to expect Helix's maintainers to volunteer their time toward building and maintaining functionality they don't personally care about.
[1]: https://microsoft.github.io/language-server-protocol/specifi...
These last two months I've been trialing both Neovim and Zed alongside Helix. I know I should probably just use Neovim since, once set up properly, it can do anything and everything. But configuring it has brought little joy. And once set up to do the same as Helix out of the box, it's noticeably slower.
Zed is the first editor I've tried that actually feels as fast as Helix while also offering AI tooling. I like how integrated everything is. The inline assistant uses context from the chat assistant. Code blocks are easy to copy from the chat panel to a buffer. The changes made by the coding agent can be individually reviewed and accepted or rejected. It's a lot of small details done right that add up to a tool that I'm genuinely becoming confident about using.
Also, there's a Helix keymap, although it doesn't seem as complete as the Vim keymap, which is what I've been using.
Still, I hope there will come a time when Helix users can have more than just Helix + Aider, because I prefer my editor inside a terminal (Helix) rather than my terminal inside an editor (Zed).
Also, the Helix way, thus far, has been to build an LSP for all the things, so I guess you'd make a Copilot LSP (I bet there already is one).
The only project I know of that recognizes this is https://github.com/SilasMarvin/lsp-ai, which pivoted away from completions to chat interactions via code actions.
I don't know the LSP spec well enough to know if these sort of complex interactions would work with it, but it seems super out of scope for it imo.
And yet, it's hard to ignore the fact that coding practices are undergoing a once-in-a-generation shift, and experienced programmers are benefiting most from it. Many of us had to ditch the comfort of terminal editors and switch to Microsoft's VSCode clones just to have these incredible new powers and productivity boosts.
Having AI code assistants built into the fast terminal editor sounds like a dream. And editors like Helix could totally deliver here if the authors were a bit more open to the idea.
edit: they updated the AI panel! looking good!
Man, so true. I tried this out a while back and it was pretty miserable trying to find docs, APIs, etc.
IIRC they even practice a lot of bulk reexports and glob imports and so it was super difficult to find where the hell things come from, and thus find docs/source to understand how to use something or achieve something.
Super frustrating because the UI of Zed was so damn good. I wanted to replicate hah.
Have you had a chance to try the new panel? (The OP is announcing its launch today!)
The announcement is about it reaching a prod release, but they emailed people to try it out in the preview version.
edit: yes, I missed something. I see the new feature. Hell yeah!
I'm on PopOS and the issue ended up being DRI_PRIME.
Might be worth trying `DRI_PRIME=0 zed`.
At least it did a month or so ago, and at that time I couldn't figure out a practical use for the LLM-integration either so I kind of just went back to dumb old vim and IDEA Ultimate.
When it's fast it's pretty snappy, though. I recently put revisiting Emacs on my todo list; I should add taking Zed out for another round as well.
Edit: I just saw your edit to your reply here[1] and that's indeed what's happening. Now the question is “why does that happen?”.
[1]
Iced, being used by System76's COSMIC EPOCH, is not great in what regards? Serious question.
IMO Slint is milestones ahead and better. They've even built out solid extensions for using their UI DSL, and they have pages and pages of docs. Of course everything has tradeoffs, and their licensing is funky to me.
I wouldn’t hold my breath. GPUI is built specifically for Zed. It is in its monorepo without separate releases and lots of breaking changes all the time. It is pretty tailored to making a text editor rather than being a reusable GUI framework.
I think there's some desire within Zed to make this a real thing for others to reuse.
Waiting for Robius / Makepad to mature a bit more. Looks very promising.
Went from Atom, to VSC, to Vim and finally to Zed. Never felt more at home. Highly recommend giving it a try.
AFAIK there is overlap between Atom's and Zed's developers. They built Electron to build Atom. For Zed they built GPUI, which renders the UI on the GPU for better performance. In case you're looking for an interesting candidate for building multi-platform GUIs in Rust, you can try GPUI yourself.
But apropos TFA, it's nice to see that telemetry is opt-in, not opt-out.
Subscribed to their paid plan just to keep the lights on and hoping it will get even better in the future.
It's open source, builds extremely well out of the box, and the UI is declarative.
Also I don't want to pay with my private data from some of my systems. So I don't ever want to sign in on those systems and just have a useless button sitting there.
One way you could use LLMs w/o inducing brain mush would be for code or design reviews, testability, etc.
If you see codebases you like, stash them away for AI explanation later.
This was a long time ago, but the way I did it was to use XcodeGen (1) and a simple Makefile. I have an example repo here (2) but it was before Swift Package Manager (using Carthage instead). If I remember correctly XcodeGen has support for Swift Package Manager now.
On top of that I was coding in VS Code at the time, and just ran `make run` in the terminal pane when I wanted to run the app.
Now, with SwiftUI, I'm not sure how it would be to not use Xcode. But personally, I've never really vibed with Xcode, and very much prefer using Zed...
1: https://github.com/yonaskolb/XcodeGen 2: https://github.com/LinusU/Soon
Tried using Zed on Linux (Pop!_OS, Nvidia) several months ago; it was terribly slow, ~1s to open the right-click context menu.
I spent some time debugging this, and it turns out my GPU drivers aren't the best with my current Pop!_OS release, but I still don't understand how it could take that long, or how the GPU is even involved in right-clicking.
Switched back to emacs, love every second. :)
I'm not sure if the title is referring to actual development speed or to editor performance.
p.s. I play top games on Linux, all is fine with my GPU & drivers.
It seems Vulkan support, the only GPU rendering API Zed uses, isn't well supported by any of the Debian derivatives. For example, the libraries are only installed and working out of the box on Ubuntu 24.04 in GNOME Wayland sessions (Ubuntu 24.04 doesn't ship a KDE new enough for Wayland support).
And there are also bugs in Zed's automatic GPU selection that will intermittently cause it to pick the wrong GPU on a system with multiple GPUs (e.g. a discrete GPU plus a motherboard with integrated graphics). Vulkan can only run on the primary rendering GPU, but Zed doesn't always pick that one, and it doesn't seem to try any of the others after the first one it picks fails, so it just falls back to emulation.
For reference, I had to spend 4 days getting Zed to install as part of a Nix home-manager config with nixGL because out of the box it failed to use the GPU on 2 of 3 systems. But after forcing it to use the right GPU with a wrapper that had Vulkan support (a nixGL wrapper) all 3 systems worked fine (so it's a Zed assumption/bug problem).
Also, the fact that Zed is unusably slow without Vulkan hardware rendering is a big problem. It's far slower than anything else on the system and cranks the CPU to 100% with its "emulated GPU" workaround. That's not acceptable; they really need to deliver at least basic performance for the seemingly large share of target systems that don't or can't meet the hardware rendering requirements.
I will keep playing around with it to see if it's worth switching (from JetBrains WebStorm).
Nvidia drivers in particular are terrible on Linux, so what OP is describing is likely some compatibility/version issue.
These simple, composable tools can be used well enough by increasingly powerful LLMs, especially Gemini 2.5 Pro, to achieve most tasks in a consistent, understandable way.
More importantly - I can just switch off the 'ask' tool for the agent to go full turbo mode without frequent manual confirmation.
I just released it yesterday, have a look at https://github.com/aperoc/toolkami for the implementation if you think it is useful for you!
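I haven't read the toolkami source, so here's only a hypothetical sketch of what an "ask"-gated agent loop can look like, with made-up tool names; the point is just that the confirmation gate is one flag away from "full turbo mode":

```python
# Hypothetical sketch (not toolkami's actual code): an agent loop where
# the "ask" gate routes every tool call through a confirm callback.

def run(agent_steps, tools, ask=True, confirm=lambda call: True):
    results = []
    for name, arg in agent_steps:
        if ask and not confirm((name, arg)):
            results.append((name, "skipped"))
            continue
        results.append((name, tools[name](arg)))
    return results

tools = {
    "read": lambda path: f"<contents of {path}>",
    "write": lambda path: f"wrote {path}",
}
steps = [("read", "main.py"), ("write", "main.py")]

# "Full turbo mode": disable the ask gate and everything just runs.
print(run(steps, tools, ask=False))

# Cautious mode: a confirm callback can veto destructive calls.
print(run(steps, tools, confirm=lambda call: call[0] == "read"))
```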
Yours is the full agent, though... Nice.
[1] https://github.com/karthink/gptel
It's like lisp's original seven operators: quote, atom, eq, car, cdr, cons and cond.
And I still can't stop smiling just watching the agent go full turbo mode when I disable the `ask` tool.
You can choose which tools are used in Zed by creating a new "tools profile" or editing an existing one (you can also add new tools via MCP).
The goal is composable semantic routing: seamless traversal between different tools through things like saved outputs and conversational partials.
Routing similar to pipewire, conversation chains similar to git, and URI addressable conversations similar to xpath.
This is being built application down to ensure usability, design sanity and functionality.
While the initial 400 error is a bummer, I'm actually surprised by, and admire, its persistence in trying to create the file and in the end finding a way to do so. It forgot to define a couple of things in the code, which was trivial to fix; after that the code worked.
If you're okay sharing the conversation with us, would you mind pressing the thumbs-down button at the bottom of the thread so that we can see what input led to the 400?
(We can't see the contents of the thread unless you opt into sharing it with the thumbs-down button.)
(I've yet to dive deep into AI coding tools and currently use Zed as an 'open source Sublime Text alternative' because I like the low latency editing.)
I don't know what Zed's doing under the hood but the diffing tool has yet to fail on me (compared to multiple times per conversation in Cursor). Compared to previous Zed AI iterations, this one edits files much more willingly and clearly communicates what it's editing. It's also faster than Claude Code at getting up to speed on context and much faster than Cursor or Windsurf.
Apart from that, it's a hell of a lot better than alternatives, and my god is it fast. When I think about the perfect IDE (for my taste), this is getting pretty close.
Anyway, you can always write your prompts to do or not do certain actions. They are adding more features; if you want, you can ignore some of them. This is not contradictory.
Ah! So you can get that experience with the agent panel (despite "agent" being in the name).
If you click the dropdown next to the model (it will say "Write" by default) and change it from "Write" to "Minimal" then it disables all the agentic tool use and becomes an ordinary back-and-forth chat with an LLM where you can add context manually if you like.
Also, you can press the three-dots menu in the upper-right and choose New Text Thread if you want something more customizable but still not agentic.
I’ve been using PyCharm Professional for over a decade (after an even longer time with emacs).
I keep trying to switch to vscode, Cursor, etc. as they seem to be well liked by their users.
Recently I’ve also tried Zed.
But the Jetbrains suite of tools for refactoring, debugging, and general “intelligence” keep me going back. I know I’m not the only one.
For those of you that love these vscode-like editors that have previously used more integrated IDEs, what does your setup look like?
But Zed is a complete rewrite, which on one hand makes it super fast, but on the other it still badly lacks integration with existing VSIX extensions, language servers, and whatnot. Many commenters in this forum fail to see that Sublime Text 4 is also extremely fast compared to Electron-based editors, yet it's not even close in terms of supported extensions.
The whole Cursor hysteria may abruptly end with Copilot/Cline/Continue advancing, and honestly, having used both, there isn't much difference in the final result, provided you know what you are doing.
[0] https://plugins.jetbrains.com/plugin/20540-windsurf-plugin-f...
I've heard decent things about the Windsurf extension in PyCharm, but not being able to use a local LLM is an absolute non-starter for me.
At the moment I’m using Claude Code in a dedicated terminal next to my Jetbrains IDE and am reasonably happy with the combination.
I've learned to work around the loss of some functionality over the past 6 months since I've switched and it hasn't been too bad. The AI features in Zed have been great and I'm looking forward to the debugger release so I can finally run and debug tests in Zed.
This isn't a great solution, but in cases where I've wanted to try out Cursor on a Java code base, I just open the project in both IDEs. I'll do AI-based edits with Cursor, and if I need to go clean them up or, you know, write my own code, I'll just switch over to IntelliJ.
Again, that's not the smoothest solution, but the vast majority of my work lately has been in Javascript, so for the occasional dip into Java-land, "dual-wielding" IDEs has been workable enough.
Cursor/Code handle JS codebases just fine - Webstorm is a little better maybe, but not the "leaps and bounds" difference between Code and IntelliJ - so for JS, I just live in Cursor these days.
vscode running a typescript extension (cline, gemini, cursor, etc) to achieve LLM-enhanced coding is probably the least efficient way to do it in terms of cpu usage, but the features they bring are what actually speeds up your development tasks - not the "responsiveness" of it all. It seems that we're making text editing and html rendering out to be a giant lift on the system when it's really not a huge part of the equation for most people using LLM tooling in their coding workflows.
Maybe I'm wrong but when I looked at zed last (about 2 months ago) the AI workflow was surprisingly clunky and while the editor was fast, the lack of tooling support and model selection/customization left me heading back to vscode/cline which has been getting nearly two updates per week since that time - each adding excellent new functionality.
Does responsiveness trump features and function?
I'm curious what you think of this launch! :D
We've overhauled the entire workflow - the OP link describes how it works now.
This is clearly a Markdown backend problem, but not really relevant in the editor arena, except maybe to realize that the editor "shell" latency is just a part of the overall latency problem.
I still keep it around as I do with other editors that I like, and sometimes use it for minor things, while waiting to get something good.
On this note, I think there's room for an open source pluggable PKM as an alternative to Obsidian and think Zed is a great candidate. Unfortunately I don't have time to build it myself just yet.
I'm also super interested in building this. OTOH Obsidian has a huge advantage for its plugin ecosystem because it is just so hackable.
One of the creators of Zed talked about their experience building Atom - at the time the plugin API was just wide open (which resulted in a ton of cool stuff, but also made it harder to keep building). They've taken a much stricter plugin API approach in Zed vs. Atom, but I think the wide-open approach is working out well for Obsidian's plugin ecosystem.
So far the only editor I've found that does this is Typora.
The pricing page was not linked on the homepage. Or maybe it was, but it surely wasn't obvious to me.
Regardless of how good the software is or pretends to be, I just don't care about landing pages anymore. The pricing page essentially tells me what I'm actually dealing with. I knew about Zed when it was being advertised with the "written in Rust because it makes us better than everyone" angle everyone was doing. Now, it's LLM-based.
Absolutely not complaining about them. Zed positioned themselves well to take the crown of the multi-billion-dollar industry AI code editors have become. I had to write this wall of text because I just wanted to drop the pricing page link and help people make their own decision, but then I'd have to reply to "what's your point" comments, and this should demonstrate that I have no point aside from dropping a link.
> ... 3. Baked into a closed-source fork of an open-source fork of a web browser
I laughed out loud at this one.
You can sign up for the beta here - https://zed.dev/debugger - or build from source right now.
The free pricing is a bit confusing: it says 50 prompts/month, but also BYO API keys.
So even if I use my own API keys, the prompts will stop at 50 per month?
Also, since it’s open source, couldn’t just someone remove the limit? (I guess that wouldn’t work if the limit is of some service provided by Zed)
I also laughed at the dig on VSCode at the start. For the unaware, the team behind Zed was originally working on Atom.
There are dozens of possible build tools for C and C++, all with complex syntax and most requiring user-provided input to configure the build. For anything beyond simple syntax highlighting, you need to context-parse all the multi-file cross references and inputs, which can only come from building the entire project with preprocessing and then parsing the LLVM IR (the compiler intermediate representation, not the AI kind of LLM). For most nontrivial projects, a compilation cycle takes 10 minutes to 4+ hours, and requires the specific settings you want to build with. Breaking it down per-file also doesn't work, because you'd have to do a complete dry run of the build system just to get the specific toolchain settings for each file. And remember, there are dozens of possible build tools that your tool now has to emulate a dry run of.
Most tools I've seen can only make a half attempt at C/C++ as a result, and usually the solutions scale incredibly poorly. The basic CTags for example, that just indexes symbols in your project source code, easily generates a >4 GB database file on something like a Yocto build. Which is why they invented Exuberant CTags that uses a binary database to try and speed it up. But even still, you're getting almost no useful context from results, and it has a very long lag in response when you do ask something.
The AI/LLM support for C and C++ seems able to make guesses with the partial info available to it, whether that's only the one file of context or the whole project (very uncommon), but it has the lowest success rate of any context helper I've ever used.
Here's a nice recent post about it: https://felix-knorr.net/posts/2025-03-16-helix-review.html
I'm catching up on Zed architecture using deepwiki: https://deepwiki.com/zed-industries/zed
But I got back on the horse & broke out Zed this weekend, deciding that I'd give it another shot, and this time be more deliberate about providing context.
My first thought was that I'd just use Zed's /fetch and slam some crates.io docs into context. But there were dozens and dozens of pages to cover the API surface, and I decided that while this might work, it wasn't a process I would ever be happy repeating.
So, I went looking for some kind of crates.io or Rust MCP server. I found cratedocs-mcp, a pretty early-looking effort. It can search crates, look up docs for crates, and look up specific members in crates; that seems like maybe it might be sufficient, maybe it might help. Pulled it down, built it... https://github.com/d6e/cratedocs-mcp
Then I checked the Zed docs for how to use this MCP server. Oh man, I need to create my own Zed extension to use an MCP service? Copy-paste this postgres-context-extension? It doesn't seem horrendous, but I was pretty deflated at this side quest continuing to tack on new objectives and gave up on the MCP idea. It feels like there should be some built-in glue that lets Zed add MCP servers via configuration, instead of by creating a whole new extension!
On the plus side, I did give DeepSeek a try and it kicked out pretty good code on the first try. Definitely some bits to fix, but pretty manageable I think, seems structurally reasonably good?
I don't really know how MCP tool integration works in the rest of the AI ecosystem, but this felt less than ideal.
The extensions are just for more ease of use as they install the server as well. A one click solution.
VS Code forks (Cursor and Windsurf) were extremely slow and buggy for me (much more so than VS Code, despite using only the most vanilla extensions).
Personally, I just use the terminal for my build tools and Zed talks to clangd just fine for autocomplete etc.
I have run into some problems with it on both Linux and Mac, where Zed hangs if the computer goes to sleep (meaning when the computer wakes back up, Zed is hung and has to be forcibly quit).
Haven't tried the AI agent much yet though. Was using CoPilot, now mostly Claude Code, and the Jetbrains AI agent (with Claude 3.7).
But I'm not sure how to get predictions working.
When the predictions on-ramp window popped up asking if I wanted to enable it, I clicked yes and then it prompted me to sign in to GitHub. Upon approving the request on GitHub, an error popover over the prediction menubar item at the bottom said "entity not found" or something.
Not sure if that's related (Zed shows that I'm signed in despite that) but I can't seem to get prediction working. e.g. "Predict edit at cursor" seems to no-op.
Anyways, the onboarding was pretty sweet aside from that. The "Enable Vim mode" on the launch screen was a nice touch.
You have to use the command palette to run "assistant: show configuration" to set up almost any API or integration except Zed's "Zeta AI", and that configuration directly conflicts with the authentication needed for the Zed login. So you currently can't use a third-party authenticated AI engine and the Zed collaboration features at the same time.
Once you've set up the third-party AI configuration, you then have to open settings.json and copy the "features" section from the default settings into it manually. Ignore the blog posts and docs from Zed; they're all wrong now that they've completely changed/broken everything with the "Zeta AI" release. In the copied "features" section of your settings.json, you have to set the prediction value to the name of your third-party AI engine. Good luck guessing what the right string value is; the values for each engine aren't documented anywhere I can find.
Basically, by default:
- You have the chat
- Inline edits you do use the chat as context
And that is extremely powerful. You can easily dump stuff into the chat, and talk about the design, and then implement it via surgical inline edits (quickly).
That said, I wasn't able to switch to Zed fully from Goland, so I was switching between the two, and recently used Claude Code to generate a plugin for Goland that does chat and inline edits similarly to how the old Zed AI assistant did it (not this newly launched one) - with a raw markdown editable chat, and inline edits using that as context.
https://zed.dev/blog/fastest-ai-code-editor
It's fast-paced, yet it doesn't gloss over anything I'd find important. It clearly shows how to use it and shows a realistic use case, e.g. the model adding some nonsense, but also catching something the author might have missed. I don't think I've seen a better AI demo anywhere.
Maybe the bar is really low that I get excited about someone who demos an LLM integration for programmers to actually understand programming, but hey.
When any video starts by asking AI "Make me a todo app" I lose interest right away.
That feature + native Git support has fully replaced VSCode for me.
Starting out with a much smaller ecosystem than already-popular alternatives is a totally normal part of the road to success. :)
Does it not do incremental edits like Cursor? It seems like the LLM is typing out the whole file internally for every edit instead of diffs, and then re-generates the whole file again when it types it out into the editor.
We actually stream edits and apply them incrementally as the LLM produces them.
Sometimes we've observed the architect model (what drives the agentic loop) decide to rewrite a whole file when certain edits fail for various reasons.
It would be great if you could press the thumbs-down button at the end of the thread in Zed so we can investigate what might be happening here!
Firstly, when navigating in a large python repository, looking up references was extremely slow (sometimes on the order of minutes).
Secondly, searching for a string in the repo would sometimes be incorrect (e.g. I know the string exists but Zed says there aren't any results, as if a search index hasn't been updated). These two issues made it unusable.
I've been using PyCharm recently and found it to be far superior to anything else for Python. JetBrains builds really solid software.
That's nice for the chat panel, but the tab completion engine surprisingly still doesn't officially support a local, private option.[0]
Especially with Zed's Zeta model being open[1], it seems like there should be a way to use that open model locally, or what's the point?
I might be missing the obvious, and I get no standard exists, but why aren't AI coding assistants just plugins?
I don't actually know their thinking but I know that for the VSCode ones (fork or extension), I tend to have at least 2 AIs at any point in time and compare them in my daily work. Probably when this field matures, lock-in will be more common, and you need control of the entire editor for that.
Now I'm excited that they actually have a Cursor-like agentic mode.
But the suggestions are still just nowhere near as "smart" as the ones from Cursor. I don't know if that's model selection or what. I can't even tell which model is being used for the suggestions.
Today I'm trying to use the Agentic stuff, I added an MCP server, and I keep getting non-stop errors even though I started the Pro trial.
First error: It keeps trying to connect to Copilot even though I cancelled my Copilot subscription. So I had to manually kill the Copilot connection.
Second error: Added the JIRA MCP server (it's working, since Zed lists all the tools the MCP makes available) and then asked a basic question (give me the 5 most recent tickets). Nope. Error interacting with the model, some OAuth error.
Third weirdness (not an error): Even though I'm on a Pro trial, the "Zed" agent configuration says "You have basic access to models from Anthropic through the Zed Free AI Plan". Aren't I on a Pro trial? I want to give you money, guys; please let me do that. I want to encourage a high-performance editor to grow.
I'm not even trying to do anything fancy; I'm just on a Pro trial. Shouldn't this be the happiest of happy paths? Zed should use whatever the Pro plan gives you, without any OAuth errors, etc. How can I help the Zed team debug this stuff? Not even sure where to start.
I also added an elixir RuleSet (I THINK it's being used, but can't easily tell).
Still missing the truly fast and elegant suggestions from Cursor (especially when Cursor suggests _removing_ lines, haven't seen that in Zed yet). But I can see it getting there.
Some agents stuff also worked well. I had it fix two elixir warnings and a rust warning in our NIF.
Unrelated to Zed, I find myself in the awkward position of maintaining a (very small) rust file in our code base without ever having coded rust. And any changes, upgrades, etc are done via AI.
So far it seems to work (according to our unit tests) and the library isn't in any critical path. But it's a new world :-)
Also, Zed still seems to only give me access to "basic" models even though I'm in the pro tier trial. Not sure if that's a bug.
Edit: Sorry, apparently this is supported. I'll give it a go!
I switched to cursor earlier this year to try out LLM assisted development and realised how much I now despise vscode. It’s slow, memory hungry, and just doesn’t work as well (and in a keyboard centric way) as Zed.
Then a couple of weeks ago, I switched back to Zed, using the agents beta. AI in Zed doesn't feel quite as polished as Cursor (at least, edit predictions don't feel as good or fast), but the agent mode works pretty well now. I still use Cursor a little because anything that isn't vscode or pycharm has, imho, a pretty bad Python LSP experience (those two do better because they use proprietary LSPs), but I'm slowly migrating to full-stack typescript (and some Gleam), so I hope to fully ditch Cursor in favour of Zed soon.
Other than that, a beautiful editor.
```
"openai": {
  "api_url": "https://openrouter.ai/api/v1",
  "version": "1",
  "available_models": [
    { "name": "anthropic/claude-3.7-sonnet:beta", "max_tokens": 200000 },
    ...
```
Just change api_url in the zed settings and add models you want manually.
If they had focused on
1. Feature-parity with the top 10 VSCode extensions (for the most common beaten path — vim keybindings, popular LSPs, etc) and
2. Implemented Cursor's Tab
3. A simple chat interface to which I can easily add context from the currently loaded repo
I would switch in a heartbeat.
I _really_ want something better than VSCode and nvim. But this ain't it. While "agentic coding" is a nice feature, and especially so for "vibe coding" projects, I (and most of my peers) don't rely on it that much to drive our daily work. It's nice for keeping less critical things going on at once, but as long as I'm expected to produce code, the two features highlighted above are what _effectively_ makes me more productive.
1. Zed has been working great for me for ~1.5 years while I ignored its AI features (I only started using Zed's AI features in the past 2 weeks). Vim keybindings are better IMHO than every other non-vim editor and the LSP's I've used (typescript, clangd, gleam) have worked perfectly.
2. The edit prediction feature is almost there. I do still prefer Cursor for this, but it's not so far ahead that I feel like I want to use Cursor, and personally I find Zed to be a much more pleasant editor to use than vscode.
3. When you switch the agent panel from "write" to "ask" mode, it's basically that, no?
I'm not into vibe coding at all, and I think AI code is still 90% trash, but I do find it useful for certain tasks, repetitive edits, and boilerplate, or just for generating a first pass at a React UI while I do the logic. For this, Zed's agent feature has worked very well and I quite like the "follow mode" as a way to see what the AI is changing, so I can build a better mental model of the changes I'm about to review.
I do wish there was a bit more focus on some core editor features: ligatures still don't fully work on Linux; why can't I pop the agent panel (or any other panel for that matter) into the center editor region, or have more than one panel docked side by side on one of the screen sides? But overall, I largely have the opposite opinion and experience from you. Most of my complaints from last year have been solved (various vim compatibility things), or are in progress (debugger support is on the way).
Huh?
I work at Zed and I like using Rust daily for my job, but outside work I also like Elm, and Zig, and am working on https://www.roc-lang.org
sorry, but to me it is just pure garbage.
Is this what happens to people who choose to learn Rust?
Joking aside, this is interesting, but I'm not sure what the selling point is versus most other AI IDEs out there? While it's great that you support ollama, practically speaking, approximately nobody is getting much mileage out of local models for complex coding tasks, and the privacy issues for most come from the LLM provider rather than the IDE provider.