I am building Aye Chat, an open-source terminal workspace that integrates an AI code generator directly into your shell, letting you edit files, run commands, and prompt the AI seamlessly.
The AI writes code directly to your files immediately, taking the "review and approve" step out of the loop.
At the same time, every AI edit is snapshotted locally, so you can instantly undo any change with a single command. This automatic file update with a safety net is the core idea.
In the same session, you can also run shell commands, open Vim, and ask the AI to modify your code, which turns your terminal into an AI-powered workspace.
I built this because I got tired of the "suggest -> review -> approve -> changes applied" loop in existing AI coding tools. As models improve and generate correct code more often than not, manual approval started to feel unnecessary, as long as there is a strong safety net that makes rolling changes back easy. The idea is not exactly groundbreaking: other tools already offer "set a dangerous flag and remember to recover later" settings, but what I am exploring is defaults, not capability. With Aye Chat, the default is the opposite: apply immediately and make undo trivial. There are no flags to remember, no mode switches, and you don't need to exit the tool to roll back.
You can watch a 1-minute demo here: https://youtu.be/h5laV5y4IrM
Basically, the typical workflow goes like this (instead of a chat window, you stay in your terminal):
$ aye chat # starts the session
> fix the bug in server.py
Fixed undefined variable on line 42
> vim server.py
[opens real Vim, returns to chat after]
> refactor: make it async
Updated server.py with async/await
> pytest
Tests fail
> restore
Reverted last changes
I use Aye Chat both in my work projects and to build Aye Chat itself. Recently, I used it to implement a local vector search engine in just a few days.
Lower-level technical details that went into the tool:
The snapshot engine is a lightweight, Python-based version control layer.
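To make the idea concrete, here is a simplified sketch of how such a snapshot layer could work; this is my own illustration, not the actual engine, and the `.aye-snapshots` directory name is hypothetical:

```python
import hashlib
import shutil
from pathlib import Path

SNAP_DIR = Path(".aye-snapshots")  # hypothetical storage location

def snapshot(path: str) -> str:
    """Store a content-addressed copy of a file before the AI edits it."""
    data = Path(path).read_bytes()
    digest = hashlib.sha256(data).hexdigest()
    SNAP_DIR.mkdir(exist_ok=True)
    (SNAP_DIR / digest).write_bytes(data)
    # Append to a simple log so restore() knows what the last edit touched.
    with open(SNAP_DIR / "log", "a") as log:
        log.write(f"{digest} {path}\n")
    return digest

def restore(path: str) -> None:
    """Revert a file to its most recent snapshot."""
    entries = (SNAP_DIR / "log").read_text().splitlines()
    for line in reversed(entries):
        digest, snap_path = line.split(" ", 1)
        if snap_path == path:
            shutil.copyfile(SNAP_DIR / digest, path)
            return
    raise FileNotFoundError(f"no snapshot recorded for {path}")
```

Content addressing means identical file versions are stored once, which keeps the snapshot directory small even across many edits.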
For retrieval, we intentionally avoided PyTorch to keep installs lightweight. Instead, we use ChromaDB with ONNXMiniLM-L6_V2 running on onnxruntime.
File indexing runs in the background using a fast coarse pass followed by AST-based refinement.
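A rough illustration of that two-pass idea (my own sketch, not the actual implementation): a cheap substring scan filters out irrelevant files, and only the survivors pay for a full parse with Python's `ast` module to extract precise function and class chunks.

```python
import ast

def coarse_pass(source: str, keyword: str) -> bool:
    """Cheap filter: skip the expensive parse when the keyword is absent."""
    return keyword in source

def ast_refine(source: str) -> list[tuple[str, int]]:
    """Precise pass: extract (name, line) for each def/class in the file."""
    chunks = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            chunks.append((node.name, node.lineno))
    return chunks

code = "def handler(req):\n    return req\n\nclass Server:\n    pass\n"
if coarse_pass(code, "def"):
    print(ast_refine(code))  # [('handler', 1), ('Server', 4)]
```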
What I learned:
The key realization was that the bottleneck in AI coding is often the interface, not the model. By letting the AI write code immediately while the user keeps full control to undo any change, we eliminate the cognitive load of the review-and-approve loop.
I also learned that early users are reluctant to trust a custom snapshot engine, so to make it professional-grade we are now integrating it with git refs.
What I'd love feedback on:
- After using it for a while, did replacing approvals with undo actually change how you work, or did it feel no different from existing terminal-based AI tools (GitHub Copilot CLI, Claude Code)?
- Was there a moment when this started to feel natural to use, or did it never quite click?
There is a 1-line quick install:
pip install ayechat
Homebrew and Windows installers are also available. Repo: https://github.com/acrotron/aye-chat. If you find it interesting, a repo star would mean a lot!
It's early days, but Aye Chat is working well for me. I would love to get your feedback. Feel free to hop into the Discord (https://discord.gg/ZexraQYH77) and let me know how it goes.
It supports multiple models via OpenRouter, direct OpenAI API usage with your key, and also includes an offline local model (Qwen2.5 Coder 7B).
You can watch a ~1-minute demo here: https://youtu.be/i-vGI6-kP4c
What I'd love feedback on:
- Does the snapshot safety net give you enough confidence to let the AI write files directly, or does it still feel too risky?
- Shell integration: does the ability to execute native commands and prompt the AI from a unified terminal interface solve the context-switching problem for you?
I am building a terminal-native tool for code generation, and one of the recent updates packages a local model (Qwen 2.5 Coder 7B, downloaded on first run) for users who do not want their code uploaded to third-party servers.
The initial response from users to this addition was favorable, but I have my doubts: the model is fairly basic and does not compare in quality to online offerings.
So I am planning to improve the RAG pipeline that assembles prompts from relevant source file chunks, add a planning call and a validation loop, and perhaps sample multiple candidates with re-ranking: all common techniques that, implemented properly, can improve output quality.
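As an illustration of the multi-sample + re-ranking step, here is a toy sketch; `generate` and `score` are stubs standing in for a real model call and a real re-ranker (which might run tests, lint, or a critic model):

```python
def generate(prompt: str, temperature: float) -> str:
    """Stub for a local-model call; returns a candidate completion."""
    return f"candidate@{temperature:.1f}"

def score(prompt: str, candidate: str) -> float:
    """Stub re-ranker: prefers candidates sampled near temperature 0.4."""
    return -abs(0.4 - float(candidate.split("@")[1]))

def best_of_n(prompt: str, n: int = 4) -> str:
    """Sample n candidates at varied temperatures, keep the top-ranked one."""
    temps = [0.2 + 0.2 * i for i in range(n)]
    candidates = [generate(prompt, t) for t in temps]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("make it async"))  # candidate@0.4
```

The value of this pattern depends almost entirely on how discriminating the scoring function is; with a weak re-ranker, best-of-n collapses to a single sample.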
The question: I believe (hope?) that with all of this implemented, the 7B model can be bumped to approximately the quality of a 20B one. Do you agree that is possible, or do you think it would be wasted effort and that kind of improvement would not happen?
The source is here - give it a star if you like what you see: https://github.com/acrotron/aye-chat
I started building an AI code generator with shell integration because I did not like switching back and forth between my terminal environment and panes where code assistants generated snippets, then having to select, copy, and paste them into my code.
So now I have a tool where, in the same terminal session, you can execute shell commands, open editors to edit files, and prompt the AI without ever leaving.
My question is: would others find this interesting and useful, or is it just me who feels the pain of context switching?
I am interested to hear from developers working in different environments; all opinions are welcome. Thanks to everyone who responds!