However, personally, I prefer to have it configured to talk directly to Anthropic, to limit the number of intermediaries seeing my code, but in general I can see myself using this in the future.
More importantly, I’m happy that they might be closing in on a good revenue stream. I don’t yet see the viability of the collaboration feature as a business model, and I was worried they’re gonna have trouble finding a way to sensibly monetize Zed and quit it at some point. This looks like a very sensible way, one that doesn’t cannibalize the open-source offering, and one that I can imagine working.
Fingers crossed, and good luck to them!
Same. I can kind of feel OK about my code going to Anthropic, but I can't have it going through another third party as well.
This is unfortunately IT/security's worst nightmare. Thousands of excitable developers are going to be pumping proprietary code through this without approval.
(I have been daily driving Zed for a few months now - I want to try this, I'm just sceptical for the reason above.)
"assistant": {
"version": "2",
"default_model": {
"provider": "anthropic",
"model": "claude-3-5-sonnet-20240620"
}
}
Once this is done, you should be able to use Anthropic if you have an API key. (This was available before today's announcement and still works as of Zed 0.149.3.)

The editor is open-source, and it being open-source is great. Others can contribute, it generally helps adoption, it will probably help with getting people to author plugins, and it means that if the project goes in an undesirable direction, the community will be able to fork.
So without making it non-open-source, they’d need to do open-core, which is incredibly hard to pull off, as you usually end up cannibalizing features in the open-source version (even if you have contributors willing to contribute them, you block it to not sabotage your revenue stream).
Open source is great and fun and an incredible force multiplier for the world, but when you want to do this stuff for a living, you have to charge money for it somehow, and if you're a software business, and not just software adjacent, it means charging for software.
They could put the code on GitHub, allowing contributions, with a license that turns into BSD or MIT after two years, and with the caveat that you can only run the (new) code if you purchase a license key first.
In companies of reasonable size, the deterrent against piracy is the existence of the license itself; the actual copy protection and its strength aren't as important. The reason those companies (mostly) don't crack software isn't that the software is hard to crack, it's that their lawyers wouldn't let them.
Sure, this would make Zed somewhat easier to crack, but I think the subset of users who wouldn't use a Zed crack if Zed was binary-only but would use one if there was source code available is very small.
There must be some other way to monetize open source.
But that seems really tough to find, for some reason.
Zed is so close, but I’d much rather see a focus on the “programmable” part and let the AI and collaboration features emerge later out of rich extensibility (i.e. as plugins, perhaps even paid plugins) than have them built-in behind a sign-in and unknown future pricing model.
Warp Terminal is a similar story, >$50M in funding for a terminal emulator of all things...
Also, Spacemacs? It's technically a terminal but definitely has a lot of UI features. Very programmable.
It's called TextAdept. Much of it is itself built on its own Lua extensibility story, which runs on a fairly compact C core. Both native GUI and terminal versions, using the same user config (keybinds etc). Linux, Mac OS, Windows builds. LSP support built in. Plenty of community-produced extensions around (but of course not as vast a range as VSCode's VSX ecosystem furnishes).
"assistant": { "enabled": false, }
What would that be for each OS?
Linux: Kate (at least if using KDE; which one would it be for GTK / Gnome?)
macOS: TextMate?
Windows: Notepad++?
It is significantly less featureful than Kate or your other apps though.
I really love the Documents Tree plugin and could never go back to old style tabs.
NotepadNext – a cross-platform reimplementation of Notepad++ | Hacker News https://news.ycombinator.com/item?id=39854182
This means 100x more effort in the long run for a cross-platform editor. Maybe if developers lived for 200 years, this could be possible. We'd need to solve the human-ageing problem before the cross-platform "native GUI" problem.
Extensibility of neovim or emacs covers all my text editor use cases.
Native GUIs offer far better accessibility (TUIs are not screen-reader accessible, and neither is Emacs' GUI currently), hugely improved UI flexibility and consistent developer APIs (Emacs GUI is inconsistent across platforms and tricky to work with, every Neovim plugin reinvents ways to draw modals/text input because there's no consistent API), reduced redraw quirks, better performance, better debugging (as a Neovim plugin dev I don't want to spend time debugging user reports that relate to the user's choice of terminal emulator this week and not to Neovim or my plugin code).
This is a great start but it's far from what most would accept as "programmable" or richly extensible.
My biggest gripe was how bad the AI was. I really want a heavy and well-crafted AI in my editor, like Cursor, but I don't want a fork of the (hugely bloated and slow) vscode, and I trust the Zed engineering team much more to nail this.
I am very excited about this announcement. I hope they shift focus from the real-time features (make no sense to me) to AI.
This was maybe 3-4 months ago, so I'm excited to try Zed again.
Where it really shines for me is repetitive crap I would usually put off. The other day I was working with an XML config and creating a class to store the config so it could be marshalled/unmarshalled.
It picked up on the sample config file in the repo and started auto-suggesting attributes from the config file, in the order they appear in the config file, even though the config was camel cased and my attributes were snake cased.
The only thing it didn’t do correctly was expand a certain acronym in variable names like I had done on other attributes. In fairness, the acronym is unclear, which is why I was expanding it, and I wouldn’t be surprised if a human did the same.
> A private beta of the Claude 3.5 Sonnet's new Fast Edit Mode, optimized for text editing. This upcoming mode achieves unprecedented speed in transforming existing text, enabling near-instantaneous code refactoring and document editing at scale.
Stack Overflow is used when I'm stuck and searching around for an answer. It's not attempting to do the work for me. At the code level, I almost never copy-paste from Stack Overflow.
I also use Claude and 4o at the same time while attempting to solve a problem, but they are rarely able to help.
- Does this language have X (functions, methods, ...)? Probably because I know X from another language and X is what I need. If it does not, I will code it.
- How do I write X again? Mostly when I'm coming back to a language I haven't touched for a while. Again, I know what I want to do; I've just forgotten the minutiae of how to write it.
- Why is X happening? Where X is some cryptic error from the toolchain, especially with proprietary stuff. There's also "how do I do X?", where X is a particular combination of steps and the documentation is lacking. I head to forums in that case to learn what's happening or get sample code.
I only need the manual/references for the first two, and the last one only needs to be done once. Accuracy is key for these use cases, and I'd prefer snippets and scaffolds (deterministic) instead of LLMs for basic code generation.
Looking things up with AI is worse because it's WRONG a lot more often. Random rabbit holes, misdirection, stuff that SOUNDS right but isn't. It takes a lot of time and energy to separate the wheat from the chaff.
Sure, you can find misleading or outdated blog posts or forum discussions with a Google search, but that information is far more grounded in correctness than anything from an LLM.
“Aha!” They say, “I only realized my folly after the 25th time someone pointed out googling also takes time!”
Maybe there’s some interesting difference in experiences that shouldn’t just be dismissed.
Not enough attention is being given to this imbalance.
It is impressive having an AI that can write code for you, but an AI that helps me understand which code we (as a team) should write would be much more useful.
Maybe this is because I'm just not used to it, maybe the workflow isn't good enough, or maybe it's because I don't trust the model enough to summarize things correctly.
I do agree that this is an area that could use improvement and I see a lot of utility there.
The default workflow is to follow the commit history until I get to where and when the code took its current shape. Then I try reading the commit message, which generally links to a ticket, and then draw on the team's tribal knowledge for why it was done that way, whether it's still necessary, what we could do today instead, etc.
And similarly when designing new code that needs to integrate with an existing piece of code... Why are there such constraints in place? Why was it done like that? Who on the team knows best?
Personally, I think the problem is that if the AI got it wrong, you'd waste a lot of time trying to figure that out. It's similar to outdated comments.
what? ok. Nice ideals.
Here's roughly what I want. I want to be able to highlight some block of code, ask the AI to modify it in some way, and then I want to see a diff view of before/after that lets me accept or reject changes.
LLMs often get code slightly wrong. That's fine! Doesn't bother me at all. What I need is an interface that allows me to iterate on code AND helps me understand the changes.
As a concrete example, I recently used Claude to help me write some Python matplotlib code. It took me a dozen-plus iterations. I had to use a separate diff tool so that I could understand what changes were being made. Blindly copy/pasting LLM code is insufficient.
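That loop can be approximated outside the editor with nothing but the standard library. Here's a minimal sketch using Python's difflib; the before/after snippets are invented stand-ins for two LLM iterations:

```python
import difflib

# Invented snippets standing in for two iterations of LLM output.
before = """import matplotlib.pyplot as plt
plt.plot(xs, ys)
plt.show()
"""
after = """import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(xs, ys, label="series")
ax.legend()
plt.show()
"""

# Produce a reviewable unified diff instead of blindly pasting the new code.
text = "".join(difflib.unified_diff(
    before.splitlines(keepends=True),
    after.splitlines(keepends=True),
    fromfile="before.py",
    tofile="after.py",
))
print(text)
```

An in-editor integration is essentially rendering these same +/- hunks with accept/reject buttons attached.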
That's exactly what this new set of Zed features lets you do.
Here's an animated GIF demo: https://gist.github.com/simonw/520fcd8ad5580e538ad16ed2d8b87...
If you squint, that's the same as using an IDE with first-class git support and co-editing with a (junior) pair programmer who commits each thing you ask them to do locally, or just saves the file and lets you see stageable diffs you can reject instead of pushing.
Try the /commit workflow using aider.chat as a REPL in your terminal, with the same git repo open in whatever IDE you like that supports real time git sync.
The REPL talks to you in diffs, and you can undo commits, and of course your IDE shows you any Aider changes the same as it would show you any other devs' changes.
That said, I use Zed and while it doesn't have all the smarts of Aider, its inline integration is fantastic.
You can even edit the prompt after the fact if the diff doesn't show what you want and regenerate without having to start all over.
Can you make the diff side-by-side? I’ve always hated the “inline” terminal style diff view. My brain just can’t parse it. I need the side-by-side view that lets me see what the actual before/after code is.
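To illustrate the kind of view I mean, the standard library can already render a side-by-side diff; a toy sketch with Python's difflib.HtmlDiff (snippets invented):

```python
import difflib

before = ["plt.plot(xs, ys)", "plt.show()"]
after = ["fig, ax = plt.subplots()", "ax.plot(xs, ys)", "plt.show()"]

# make_table renders an HTML table with the old code in a left column
# and the new code in a right column — side-by-side, not inline.
html = difflib.HtmlDiff().make_table(
    before, after, fromdesc="before", todesc="after"
)
print(html[:120])
```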
Zed does that - here's a clip of it on some Python code:
> Add build time options to disable ML/AI features
https://github.com/zed-industries/zed/issues/6756
Just give me a good editor.
In my opinion, storing someone's credit card data online after purchase, without a clear option to delete it should be illegal.
Feature requests: have something like aider's repo-map, where context always contains high level map of whole project, and then LLM can suggest specific things to add to context.
Also, a big use case for me is building up an understanding of an unfamiliar code base, or part of a code base. "What's the purpose of the X module?", "How does X get turned into Y?".
For those, its helpful to give the LLM a high level map of the repo, and let it request more files into the context until it can answer the question.
( Often I'm in learning mode, so I don't yet know what the right files to include are. )
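The repo-map idea can be sketched in a few lines: walk the tree and record each file's top-level definitions, so the LLM sees an outline before requesting full files. This is a toy illustration for Python files only, not aider's actual implementation:

```python
import ast
import os

def repo_map(root):
    """Toy 'repo map': each Python file mapped to its top-level
    function/class names. Illustrative only."""
    out = {}
    for dirpath, _, files in os.walk(root):
        for name in sorted(files):
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8") as f:
                    tree = ast.parse(f.read())
            except (SyntaxError, UnicodeDecodeError):
                continue
            out[path] = [
                node.name
                for node in tree.body
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
            ]
    return out
```

The map for a whole repo is tiny compared to the source itself, which is what makes it cheap to keep in context permanently.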
https://github.com/jackMort/ChatGPT.nvim
https://github.com/olimorris/codecompanion.nvim
How is typing "Add the WhileExpression struct here" better or easier than copy/pasting it with keyboard and/or mouse?
I want something that more quickly and directly follows my intent, not makes me play a word game. (I'm also worried it will turn into an iterative guessing game, where I have to find the right prompt to get it to do what I want, and check it for errors at every step.)
I'm already paying for OpenAI API access, definitely gonna try this
This is not a criticism of zed though, I simply have no interest. Much the contrary: I can only praise Zed as to how simple it is to disable all these integrations!
I wonder what this is. Have they finetuned a version which is good at producing diffs rather than replacing an entire file at once? In benchmarks sonnet 3.5 is better than most models when it comes to producing diffs but still does worse than when it replaces the whole file.
Moreover, there's plenty of quirks Windows has with respect to:
- Unicode (UTF-16 whereas the world is UTF-8; even Java uses UTF-8 nowadays, so it's only Windows where UTF-8 is awkward to use)
- filenames (lots of restrictions that don't exist on Mac or Linux)
- text encoding (the data in the same string type changes depending on the user's locale)
- UUIDs (stored in a mixed-endian format)
- limit of open files (much lower than Mac and Linux; breaks tools like Git)
If you write software in Java, Golang, or Node.js, you'll quickly encounter all of these issues and produce software with obscure bugs that only occur on Windows.
I'm not sure about Rust, but using languages that claim cross-platform support isn't enough to hide the details of an OS.
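The UUID point is easy to demonstrate from Python's standard library: the same UUID has a different byte layout depending on whether you ask for the RFC 4122 big-endian form or the Windows-style mixed-endian (GUID) form:

```python
import uuid

u = uuid.UUID("00112233-4455-6677-8899-aabbccddeeff")

# RFC 4122 network (big-endian) byte order:
print(u.bytes.hex())     # 00112233445566778899aabbccddeeff

# Windows GUID layout: the first three fields are little-endian,
# the last two are unchanged — hence "mixed-endian".
print(u.bytes_le.hex())  # 33221100554477668899aabbccddeeff
```

Code that serializes `bytes` on one platform and parses it as `bytes_le` on another produces exactly the obscure Windows-only bugs described above.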
In every job I've had, the vast majority of devs were on Mac OS, with Linux coming in at a close second (usually senior devs). So I wasn't surprised Zed first supported Mac then Linux. Windows support is nice for students, game developers, and people maintaining legacy .NET software, but those people won't be paying for an editor.
Java uses mixed Latin1/UTF-16 strings. The Latin1 mode is used for compact storage of alphanumeric text as the name suggests: https://github.com/openjdk/jdk/blob/1ebf2cf639300728ffc02478...
Anyway, none of these sound like major hurdles. I think the bigger hurdles are going to be low-level APIs that Rust probably doesn't have nice wrappers for. File change notifications and... I don't know what. Managing windows. Drivers.
And Windows is by far the development platform of choice for any serious gamedev work.
git clone https://github.com/zed-industries/zed
cargo run --release

* The framework they use supports X11 and Wayland out of the box; it wasn't as much effort as you'd think.
* They accept contributions.
I realize y'all are out there, but from where I'm sitting, this isn't odd at all. They're likely most familiar with, and using, Unixes.
You know, things like not rerendering the entire UI on the smallest change (including just moving your mouse) without damage reporting.
I have no experience using (current) VS Code, but I've used Neovim on a daily basis for a couple of years. I think what makes an editor a "better editor" is the small things, the things which solve problems that might cause a little friction while using the editor. Having a lot of these little points of friction results in an annoying experience (for me).
Zed has a lot of these (from the outside) simple issues and I don't see them working on them. Again, I understand that they have to prioritize. But this doesn't result in me feeling comfortable spending time adopting this editor. I'm "scared" that issues like https://github.com/zed-industries/zed/issues/6843 might be very low on the list of work being done and always will be, while the next big (maybe honestly great) feature gets all the attention.
I'm not sure what that is, but I'm guessing it will be something along the lines of Prolog.
You will basically give it some test cases, and it will write code that passes those test cases.
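A minimal version of that contract — generated code judged only by the test cases it passes — might look like this (the `passes_tests` helper and the `add` spec are invented for illustration):

```python
def passes_tests(candidate, cases):
    """Return True if the candidate function reproduces every
    (args, expected) pair — the 'spec is the test cases' idea."""
    return all(candidate(*args) == expected for args, expected in cases)

# Invented spec for an addition function:
cases = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

print(passes_tests(lambda a, b: a + b, cases))  # True
print(passes_tests(lambda a, b: a * b, cases))  # False
```

The generator's job then reduces to searching for any candidate that makes `passes_tests` return True, which is why the quality of the test cases becomes the whole specification.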
I just had a many-hour long hacking session with Perplexity to generate a complex code module.
A simple example: Something as simple as the hotkeys for opening or closing the project panel with the file tree aren't consistent and don't work all the time.
To be clear: I am excited about this new addition. I understand there's a ton of value in these LLM "companions" for many developers and many use cases, and I know why Zed is adding it...but I really want to see the core editor become bullet proof before they build more features.
I think the focus on speed is great, but I don't feel my IDE's speed has held me back in a decade.
https://news.ycombinator.com/item?id=35947073
To anyone familiar with Aider, this Zed.ai post feels as if it's chasing Paul's remarkably pragmatic ideas for making LLMs adept at codebases, without yet hitting the same depth of repo understanding or bringing automated smart loops to the process.
Watching Aider's "wait, you got that wrong" prompt chains kick in before handing the code back to you is a taste of "AI".
If your IDE is git-savvy, then working with Aider in an Aider REPL terminal session with frequent /commits that update your IDE is like pair programming with a junior dev who happens to have read all the man pages, doc wikis, and Stack Overflow answers for your project.
I love IntelliJ but it does not start up quickly, which is a problem if I just want to look at a little code snippet.
On another side, I really like the experience of coding with GitHub Copilot. It suggests code directly in your editor without needing to switch tabs or ask separately. It feels much more natural and faster than having to switch tabs and request changes from an AI, which can slow down the coding process.
Don't take it as sarcasm, I am genuinely interested. I think Emacs' malleability is what still keeps it alive.
It's hard for me to understand what text editor itself has to do with LLM completions.
For example, you can do /file *.rs to load all of the rust files in your project into context.
Here is a simple but real example I used a while back:
"/file zed/crates/gpui/src/text_system.rs
I have a font I want to check if it exists on the system. I currently have a &'static str.
Is there something in here that will help me do that?"
I haven't interfaced with the lower level TextSystem that much, so rather than dig through 800 lines of code, I was able to instantly find `is_font_available()` and do what I needed to do.
Brave Browser, Windows 10
What's next? Web3 integration? Blockchain?
Zed vs Cursor review anyone?
- transparent assistant panel vs opaque composer. you control your own prompts (cf. [0])
- in Zed the assistant panel is "just another editor", which means you can inline-assist when writing prompts. super underrated feature imo
- Zed's assistant is pretty hackable as well, you can add slash commands via native Zed extensions [1] or non-native, language-agnostic Context Servers [2]
- Zed's /workflow is analogous to Cursor's composer. to be honest it's not quite as good yet, however it's only ~1 week old. we'll catch up in no time :)
- native Rust vs Electron slop. Zed itself is one of the larger Rust projects out there [3]; it can be hard to work with in VS Code/Cursor, but it's speedy in Zed itself :)
[0]: https://hamel.dev/blog/posts/prompt/
[1]: https://zed.dev/docs/extensions/slash-commands
[2]: https://zed.dev/docs/assistant/context-servers
[3]: https://blog.rust-lang.org/inside-rust/2024/08/15/this-devel...
For composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/ objective function for codegen along with an ability to navigate the language server and look up definitions and just generally have full context like an engineer. Are there plans for the same in zed?
Also, cursor has a model agnostic apply model, whereas you all are leaning on claude.
It is really smooth on a Mac with ProMotion.
Cursor is great – we explored an alternate approach to our assistant similar to theirs as well, but in the end we found we wanted to lean into what we think our superpower is: transforming text.
So we leaned into it heavily. Zed's assistant is completely designed around retrieving, editing and managing text to create a "context"[0]. That context can be used to have conversations, similar to any assistant chatbot, but can also be used to power transformations right in your code[1], in your terminal, when writing prompts in the Prompt Library...
The goal is for context to be highly hackable. You can use the /prompt command to create nested prompts, use globs in the /file command to dynamically import files in a context or prompt... We even expose the underlying prompt templates that power things like the inline assistant so you can override them[2].
This approach doesn't give us the _simplest_ or most approachable assistant, but we think it gives us and everyone else the tools to create the assistant experience that is actually useful to them. We try to build the things we want, then share it with everyone else.
TL;DR: Everything is text because text is familiar and it puts you in control.
[0]: https://zed.dev/docs/assistant/contexts.html
[1]: https://zed.dev/docs/assistant/inline-assistant
[2]: https://zed.dev/docs/assistant/prompting#overriding-template...
I posted this above, but want you to see it:
Two areas where I think Zed might fall behind: Cursor Tab is REALLY good and probably requires some finetuning/ML chops and some boutique training data.
For composer, there's going to be more use of "shadow workspace" https://www.cursor.com/blog/shadow-workspace to create an agentic feedback loop/ objective function for codegen, along with an ability to navigate the language server and look up definitions and just generally have full context like an engineer
Also, cursor has a model agnostic apply model, whereas you all are leaning on claude.
Any plans to address this from the core team or more of a community thing? I think some of this might be a heavy lift
I really like the shared context idea, and the transparency and building primitives for an ecosystem
I'm logged in, using Zed Preview, and selecting the model does nothing. In the configuration it says I "must accept the terms of service to use this provider" but I don't see where and how I can do that.
Edit: JetBrains, not IntelliJ. Auto-complete details - https://blog.jetbrains.com/blog/2024/04/04/full-line-code-co...
If I've never used Copilot I might be slightly impressed.
I've disabled Copilot as it produces garbage way too often. I found myself "pausing" to wait for a suggestion that I'd ultimately ignore because it was completely invalid. I've left JetBrains' code completion on, though, because it's basically just a mildly "smarter" autocomplete that I'll occasionally use and don't find myself relying on.
They also focus on single line completions, and ship different models per programming language. All these make it possible to ship a decent completion engine with a very small download size.
I disagree. When I'm writing code and it just autocompletes the line for me with the correct type, with the correct type params set for the generics, it saves me the mental effort of having to scroll around the file to find the exact type signature I needed. For these small edits it's always right if the information is in the file.
https://research.google/blog/ai-in-software-engineering-at-g...
I've been wondering if the benefits of AI-autocomplete are more material to people who work in languages like Python and JavaScript that are harder to provide IDE-based autocomplete for.
If you're a Java or TypeScript developer maybe the impact is reduced because you already have great autocomplete by default.
The normalisation of surrendering all our code to remote AI cloud gods, maybe? The other being a super-responsive IDE now having major features delayed by network requests, although HW requirements likely make that faster for most people.
Take the AI out of the conversation: if you told your employer you shared the codebase, that’s an insta-fire kind of move.
I was really looking forward to trying Zed, but this just means I'll stick to VS Code with the AI gunk disabled.
In general, if any product comes with "AI" I'm turned off by it.