- Emacs (inherited from the Lisp machines?). A VM powered by Lisp. The latter makes it easy to redefine functions, and commands are just annotated functions. As for output, we have the buffer, which can be displayed in windows, which are arranged in a tiling manner in a frame. And you can have several frames. As a buffer in a window has the same grid-like basis as a terminal emulator, we can use CLI tools as-is, including through a terminal emulator (vterm, eat, ansi-term, ...). You can also eschew the terminal flow and use the REPL flow instead (shell-mode, eshell, ...). There's support for graphics, but not a full 2D context.
- Acme: Kinda similar to Emacs, but the whole thing is mostly about interactive text, meaning any text can be a command. We also have the tiling and stacking windows that display those texts.
I would add Smalltalk to that, but it's more of an IDE than a full computing environment. Still, extending it to the latter would take less effort than what is described in the article.
I open a poorly aligned, pixelated PDF scan of a 100+ year old Latin textbook in Emacs, mark a start page and an end page, and Emacs Lisp code shells out to qpdf to create a new, smaller PDF from my page range in /tmp, then adds the resulting PDF to my LLM context. Then my code calls gptel-request with a custom prompt and I get an async elisp callback with the OCR'd PDF now in Emacs' org-mode format, complete with italics, bold, nicely formatted tables, and all the right macrons over the vowels, which I toss into a scratch buffer. Now that the chapter from my textbook is in a markup format, I can select a word and immediately pop up a Latin-to-English dictionary entry, or select a whole sentence to hand to an LLM for a full grammatical breakdown while I'm doing my homework exercises. This 1970s-vintage text editor is also a futuristic language learning platform; it blows my mind.
and all it took was a deep understanding of software development, experience with lisp and a bunch of your own time coding and debugging! what a piece of software!
Got a link to what you meant? This is pretty hard to search for.
> - Emacs
One thing in common with emacs, jupyter, vscode.. these are all capable platforms but not solutions, and if you want to replace your terminal emulator by building on top of them it's doable but doesn't feel very portable.
I'd challenge people that are making cool stuff to show it, and then ship it. Not a pile of config + a constellation of plugins at undeclared versions + a "simple" 12-step process that would-be adopters must copy/paste. That's platform customization, not something that feels like an application. Actually try bundling your cool hack as a docker container or a self-extracting executable of some kind so that it's low-effort reproducible.
http://www.youtube.com/watch?v=dP1xVpMPn8M
> I'd challenge people that are making cool stuff to show it, and then ship it.
Emacs has the following built in, and more:
- Org mode (with babel): Note taking and outliner, authoring, notebooks, agenda, task management, timetracking,...
- Eshell: A shell in Lisp, similar to fish, but all the editor commands are available like CLI tools.
- comint: All things REPL (sql client, python,...)
- shell-command and shell-mode: The first is for ad-hoc commands; the second is derived from comint and gives you the shell in a REPL environment (no TUI).
- term: terminal emulator, for when you really want a TUI. But the support for escape sequences is limited, so you may want something like `eat` or `vterm`.
- compile: all things build tools. If you have something that reports errors and where those errors are located in files, then you can tie it into compile and get fast navigation to those locations.
- flymake: Watch mode for the above; it analyzes the current file as you edit.
- ispell and flyspell: Spell checking
- dired: file management
- grep: Use the output of $grep_like_tool for navigation
- gnus and rmail: Everything mail and newsgroup.
- proced: Like top
- docview: View pdf and ps files, although you can probably hack it to display more types.
- tramp: Edit files from anywhere...
And many more, from utilities (calc, calendar) and games to low-level functions (IPC, network, ...) and full-blown applications (debugger, MPD client). And a lot of stuff for writing text and code. All Lisp code, with nice documentation. And that's just the built-in stuff.
If not for the state of the Web, you could probably go straight from init to Emacs.
It's part of plan9:
Maybe it is an API. Maybe the kernel implements this API and it can be called locally or remotely. Maybe someone invents an OAuth translation layer to UIDs. The API allows syscalls or process invocation. Output is returned in response payload (ofc we have a stream shape too).
Maybe in the future your “terminal” is an app that wraps this API, authenticates you to the server with OAuth, and can take whatever shape pleases you- REPL, TUI, browser-ish, DOOM- like (shoot the enemy corresponding to the syscall you want to make), whatever floats your boat.
Heresy warning. Maybe the inputs and outputs don’t look anything like CLI or stdio text. Maybe we move on from 1000-different DSLs (each CLI’s unique input parameters and output formats) and make inputs and outputs object shaped. Maybe we make the available set of objects, methods and schemas discoverable in the terminal API.
Terminals aren’t a thing of the 80s; they’re a thing of the early 70s when somebody came up with a clever hack to take a mostly dumb device with a CRT and keyboard and hook it to a serial port on a mainframe.
Nowadays we don’t need that at all; old-timers like me like it because it’s familiar but it’s all legacy invented for a world that is no longer relevant. Even boot environments can do better than terminals today.
This is PowerShell. It's a cool idea for sure. One thing I've noticed, though, is that it becomes closer to a programming language and further away from scripting (i.e., you have to memorize the APIs and object shapes). And at that point, why would you write the program in a worse programming language?
By comparison, I've noticed even Windows-leaning folks do a better job remembering how to delete and find files with Unix tools than through cmd.exe or PowerShell. I think that's because you can run the command, see the output, and then you know the text transformation you need to apply for the next step, whereas PowerShell shows you formatted text but passes objects in the pipe.
Maybe a better terminal that provided completion for commands with AI support and a uniform way to observe the object shapes instead of formatted text might mitigate this weakness but it is real today at least imho.
Somewhat true. However it's easy to explore what methods and properties are available. Just add `| gm` (Get-Member) to the end of your pipeline to see what you're dealing with and what's available.
Terminals are not "text oriented". They are based on a bidirectional stream of tokens - that can be interpreted as text, or anything else.
That simplicity allows for Unix-style composition. If you make the output something different, then the receiving program will need to be able to parse it. The Amiga OS had some interesting ideas with different data types as system extensions - you'd receive "an image" instead of a JPEG file and you could ask the system to parse it for you. In any case, that's still forcing the receiving program to know what it's receiving.
One way to add some level of complexity is to add JSON output to programs. Then you can push them trough `jq` instead of `grep`, `sed`, or `awk`. Or push it through another tool to make a nice table.
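For instance (the JSON document below is made up for illustration), jq can filter on structure, i.e., field names and values, instead of column positions or regexes:

```shell
# A made-up JSON record set standing in for some tool's --json output.
# jq selects by field value, something grep/awk can only approximate
# with position- or pattern-based matching.
echo '[{"name":"a.go","size":120},{"name":"a_test.go","size":80}]' |
  jq -r '.[] | select(.size > 100) | .name'
# prints: a.go
```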
> it’s all legacy invented for a world that is no longer relevant.
I hear that since the Lisa was introduced. Character streams are a pretty common thing today. They are also very useful thanks to their simplicity. Much like Unix, it's an example of the "worse is better" principle. It's simpler, dumber, and, because of that, its uses have evolved over decades with almost no change to the underlying plumbing required - the same tools that worked over serial lines, then multiplexed X.25 channels, then telnet, now work under SSH streams. Apps on both sides only need to know about the token stream.
That's still text. Even PowerShell passes objects between commands.
Plan9 did this correctly. A terminal was just a window which could run graphical applications or textual applications. Locally or remotely. It all worked. You create a window, you get a shell with a text prompt. You can do text stuff all day long. But maybe you want that window to be a file manager now? Launch vdir, and that same window is home to a graphical file browser. Close that and remote into another Plan9 machine. Launch Doom. It runs. It all just works, and it all works smoothly.
And the entire source code for that OS could fit into one person's brain.
It is a very simple OS, appears (to my layman eye) to have sandboxing between all applications by default (via per-process namespaces) making it very easy to keep one application off of your network while allowing others to talk via network as much as they want, for example.
Entirely agree. Stdio text (which is really just stdio bytes) deeply limits how composable your shell programs can be, since data and its representation are tightly coupled (they're exactly the same). I wrote a smidgin here[0] on my blog, but take a look at this unix vs. PowerShell example I have there. Please look beyond PowerShell's incidental verbosity here and focus more deeply on the profoundly superior composition that you can only have once you get self-describing objects over stdio instead of plain bytes.
$ # the unix way
$ find . -name '*.go' -not -name '*_test.go' -ctime -4 -exec cat {} \; | wc -l
7119
$ # the powershell way
$ pwsh -c 'gci -recurse | where {($_.name -like "*.go") -and ($_.name -notlike "*_test.go") -and ($_.LastWriteTime -gt (get-date).AddDays(-4))} | gc | measure | select -ExpandProperty count'
7119
[0] https://www.cgl.sh/blog/posts/sh.html

One key aspect of the Unix way is that the stream is of bytes (often interpreted as characters) with little to no hint as to what's inside it. This way, tools like `grep` and `awk` can be generic and work on anything, while others such as `jq` can specialize and work only on a specific data format, and can do more sophisticated manipulation because of that.
The terminal of plan9 was just a window. By default you got a shell with a textual prompt, but you can launch any graphical or textual application in there. You can launch a 2nd window manager with its own windows. You can run Doom. You can `ls` and `ssh` all you like. It all just works.
this debuted in Plan9 in 1995 or so. 30 years ago we had the terminal of the future and the entire world ignored it for some reason. I'm still a bit mad about it.
It’s like PowerShell but not ugly and not Microsoft.
I have this feeling with most things that are not the "default", especially when I think of getting new tools adopted into a conservative workplace.
I think because we already have non-text-based terminal successors.
I think there is interest in a successor to text-based terminals because a lot of people like them, but the space has been rather stagnant for a while.
To put it bluntly, "what if it's nothing like you ever imagined" isn't all that interesting as speculation, because it doesn't commit to any choices. A proposal has to be imaginable to be interesting.
We've had them for a long time. There have been multiple graphics standards terminals supported - Tektronix, ReGIS, Sixels, up to richer, less successful interfaces (such as AT&T's Blit and its successors - all gorgeous, all failed in the marketplace).
The notebook interface popularized by iPython is an interesting one, but it's not really a replacement for a terminal.
The last thing a command-line terminal needs is a Jupyter Notebook-like UI. It doesn't need to render HTML; it doesn't need rerun and undo/redo; and it definitely doesn't need structured RPC. Many of the mentioned features are already supported by various tooling, yet the author dismisses them because... bugs?
Yes, terminal emulators and shells have a lot of historical baggage that we may consider weird or clunky by today's standards. But many design decisions made 40 years ago are directly related to why some software has stood the test of time, and why we still use it today.
"Modernizing" this usually comes with very high maintenance or compatibility costs. So, let's say you want structured data exchange between programs ala PowerShell, Nushell, etc. Great, now you just need to build and maintain shims for every tool in existence, force your users to use your own custom tools that support these features, and ensure that everything interoperates smoothly. So now instead of creating an open standard that everyone can build within and around of, you've built a closed ecosystem that has to be maintained centrally. And yet the "archaic" unstructured data approach is what allows me to write a script with tools written decades ago interoperating seamlessly with tools written today, without either tool needing to directly support the other, or the shell and terminal needing to be aware of this. It all just works.
I'm not saying that this ecosystem couldn't be improved. But it needs broad community discussion, planning, and support, and not a brain dump from someone who feels inspired by Jupyter Notebooks.
A "barely better" version of something entrenched rarely win (maybe only if the old thing not get updaters).
This is the curse of OpenOffice < MS Office.
This is in fact the major reason:
> Great, now you just need to build and maintain shims for every tool in existence
MOST of those tools are very bad at UX! So inconsistent, weird, and arcane that yes, it is MADNESS to shim all of them.
Instead, if done from first principles, you can collapse thousands of CLI arguments, options, switches, and such into a few (btw, a good example is jj vs git).
This is how it could be: adopt an algebra similar to the relational model, plus a standardized set of the things millions of little tools each reimplement (help commands, sorting, colors, input/output formats, etc.), and suddenly you have a more tractable solution.
ONLY when a tool is a total game changer will people switch.
And what about all the other stuff? In FoxPro (which in some ways shows the idea) you just prepend `!` and then run the shell command you need. That is enough. (Editors and such? Much better to redo them the new way, and everyone knows vim and Emacs fans never change their ways.)
Yes, this is the work. https://becca.ooo/blog/vertical-integration/
Yes, you effectively are, and the current unstructured buggy mess is "just works" for you.
> But it needs broad community discussion, planning, and support,
Where was this when all the historic mistakes were made? And why would fixing them suddenly need to overcome this extra barrier?
- https://arcan-fe.com/ which introduces a new protocol for TUI applications, which leads to better interactions across the different layers (hard to describe! but the website has nice videos and explanations of what is made possible)
- Shelter, a shell with reproducible operations and git-like branches of the filesystem https://patrick.sirref.org/shelter/index.xml
it "cheats" a little because it requires the underlying filesystem to support snapshots but it's still really really cool, thank you for the link!
Maintaining a high level of backwards compatibility while improving the user experience is critical. Or at least to me. For example, my #1 frustration with neovim is the change to `!`: it no longer just swaps the alt screen back to the default and lets me see and run what I was doing outside of it.
We generally like the terminal because, unlike GUIs, it's super easy to turn a workflow into a script, a manual process into an automated one. Everything is reproducible, and everything is ripgrep-able. It's all right there at your fingertips.
I fell in love with computers twice, once when I got my first one, and again when I learned to use the terminal.
An article called "A Spreadsheet and a Debugger walk into a Shell" [0] by Bjorn (letoram) is a good showcase of an alternative to cells in a Jupyter notebook (Excel like cells!). Another alternative a bit more similar to Jupyter that also runs on Arcan is Pipeworld.
[0] https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger... [1] https://arcan-fe.com/2021/04/12/introducing-pipeworld/
PS: I hang out at Arcan's Discord Server, you are welcome to join https://discord.com/invite/sdNzrgXMn7
It is very hard to explain Arcan but I tried:
https://www.theregister.com/2022/10/25/lashcat9_linux_ui/
I talked to Bjorn Stahl quite a bit before writing it, but he is so smart he seems to me to find it hard to talk down to mere mortals. There's a pretty good interview with him on Lobsters:
https://lobste.rs/s/w3zkxx/lobsters_interview_with_bjorn_sta...
You really should talk to him. Together you two could do amazing things. But IMHO let Jupyter go. There's a lot more to life than Python. :-)
There's even more under the "Updates archive" expando in that post.
It was a pretty compelling prototype. But after I played with Polyglot Notebooks[1], I pretty much just abandoned that experiment. There's a _lot_ of UI that needs to be written to build a notebook-like experience. But the Polyglot notebooks took care of that by just converting the commandline backend to a jupyter kernel.
I've been writing more and more script-like experiments in those ever since. Just seems so much more natural to have a big-ol doc full of notes, that just so happens to also have play buttons to Do The Thing.
[1]: https://marketplace.visualstudio.com/items?itemName=ms-dotne...
But just showing a browser like Jupyter does would be very useful. It can handle a wide variety of media and can easily show JS-heavy webpages, unlike curl; and with a text option to show text-based results like w3m, but with JS support, it would be even more useful.
browser google.com/maps # show google map and use interactively
browser google.com/search?q=cat&udm=2 # show google image result
browser --text jsheavy.com | grep -C 10 keyword # show content around keyword but can handle JS
vim =(browser --text news.ycombinator.com/item?id=45890186) # show a Hacker News article and edit the text result directly

That is typically not the job of terminals, but of programs. fbi, omxplayer, etc. exist.
Why? Well one reason is escape sequences are really limited and messy. This would enable everyone to gradually and backward-compatibly transition to a more modern alternative. Once you have a JSON-RPC channel, the two ends can use it to negotiate what specific features they support. It would be leveraging patterns already popular with LSP, MCP, etc. And it would be mostly in userspace, only a small kernel enhancement would be required (the kernel doesn’t have to actually understand these JSON-RPC messages just offer a side channel to convey them).
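As a sketch, a single exchange on that side channel might look like the following; the framing is JSON-RPC 2.0 as LSP uses it, but the method and capability names here are invented for illustration:

```shell
# Hypothetical negotiation: one end advertises what it can do, the
# other answers with the subset it supports; both sides then use only
# the intersection of the advertised capabilities.
request='{"jsonrpc":"2.0","id":1,"method":"terminal/initialize","params":{"capabilities":{"images":true,"hyperlinks":true}}}'
response='{"jsonrpc":"2.0","id":1,"result":{"capabilities":{"images":true,"hyperlinks":false}}}'
# Crude capability check on the reply (a real client would parse JSON):
case "$response" in
  *'"images":true'*) echo "images: negotiated" ;;
  *)                 echo "images: unsupported" ;;
esac
# prints: images: negotiated
```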
I suppose you could do it without any kernel change if you just put a Unix domain socket in an environment variable: but that would be more fragile, some process will end up with your pty but missing the environment variable or vice versa
Actually I’d add this out-of-band JSON-RPC feature to pipes too, so if I run “foo | bar”, foo and bar can potentially engage in content/feature negotiation with each other
No need for content/feature negotiation... machine-readable output just defaults to JSON unless there's a --format flag for something else. And if you add that on the generating side of the pipe, you just need to remember to put it on the consuming side.
There are problems with using JSON for this; other formats would be better. JSON needs escaping, cannot effectively transfer binary data (other than encoding as hex or base64), cannot use character sets other than Unicode, etc. People think JSON is good, but it isn't.
Also, you might want to use less or other programs for the text output, which might be the primary output that you might also want to pipe to other programs, redirect to a file (or printer), etc. This text might be separate from the status messages (which would be sent to stderr; these status messages are not necessarily errors, although they might be). If you use --help deliberately then the help message is the primary message, not a status message.
(In a new operating system design it could be improved, but even then, JSON is not the format for this; a binary format would be better (possibly DER, or SDSER, which is a variant of DER that supports streaming, in a (in my opinion) better way than CER and BER does).)
(Another possibility might be to add another file descriptor for structured data, and then use an environment variable to indicate its presence. However, this just adds to the messiness of it a little bit, and requires a bit more work to use it with the standard command shells.)
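That extra-descriptor idea can be sketched in bash; the `STRUCTURED_FD` variable name and the JSON payload here are invented for illustration:

```shell
# Producer: human-readable text on stdout, machine-readable JSON on the
# descriptor named by STRUCTURED_FD (if set). A shell that doesn't know
# about the convention simply never sets the variable and gets plain
# text only, so nothing breaks.
emit() {
  echo "report.pdf: 1024 bytes"
  if [ -n "${STRUCTURED_FD:-}" ]; then
    echo '{"name":"report.pdf","size":1024}' >&"$STRUCTURED_FD"
  fi
}

# Consumer side: wire fd 3 to a file, advertise it, run the tool.
tmp=$(mktemp)
exec 3>"$tmp"
STRUCTURED_FD=3 emit
exec 3>&-
cat "$tmp"    # the structured channel, separate from the text above
rm -f "$tmp"
```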
With Lisp REPLs, one types in the IDE/editor, with full highlighting, completions, and code intelligence. Then the code is sent to the REPL process for evaluation. For example, Clojure has great REPL tooling.
A variation of REPL is the REBL (Read-Eval-Browse Loop) concept, where instead of the output being simply printed as text, it is treated as values that can be visualized and browsed using graphical viewers.
Existing editors can already cover the runbooks use case pretty well. Those can be just markdown files with key bindings to send code blocks to shell process for evaluation. It works great with instructions in markdown READMEs.
The main missing feature for an editor-centric command-line workflow I can imagine is history search. It could be interesting to see whether it would be enough to add shell history as a completion source. Or perhaps have a shell LSP server provide history and other completions that could work across editors?
Atuin runbooks (mentioned in the article) do this! Pretty much anywhere we allow users to start typing a shell command we feed shell history into the editor
Fish shell does this too
My biggest gripe with it is that it quickly ends up becoming an actual production workload, and it is not simple to “deploy” and “run” it in an ops way.
Lots of local/project specific stuff like hardcoded machine paths from developers or implicit environments.
Yes, I know it can be done right, but it makes it sooooooooo easy to do it wrong.
I think I can’t help but see it as a scratchpad for ad-hoc stuff.
Its flexibility is beyond imagination. Programs can emit anything from simple numbers/vectors/matrices to medias (image, sound, video, either loaded or generated) to interactive programs, all of which can be embedded into the notebook. You can also manipulate every input and output code blocks programmatically, because it's Lisp, and can even programmatically generate notebooks. It can also do typesetting and generate presentation/PDF/HTML from notebooks.
What people have been doing w/ Markdown and Jupyter in recent years has been available in Mathematica since (at least) 1-2 decades ago. FOSS solutions still fall short, because they rely on static languages (relative to Lisp, of course).
I mean, really, it's a technological marvel. It's just that it's barred behind a high price tag and limited to low core counts.
Independent of the rest, I would love for more terminal emulators to support OSC 133.
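For reference, a prompt instrumented with OSC 133 markers looks roughly like the following; this is the commonly documented FinalTerm/iTerm2 marker scheme, but check your emulator's docs for the variants it actually supports:

```shell
# OSC 133 "semantic prompt" markers tell the terminal where the prompt,
# the typed command, and the command's output begin, enabling features
# like jump-to-previous-prompt and select-last-command-output.
printf '\033]133;A\033\\'      # A: start of prompt
printf 'user@host$ '
printf '\033]133;B\033\\'      # B: end of prompt, start of user input
printf 'make\n'
printf '\033]133;C\033\\'      # C: start of command output
printf 'build ok\n'
printf '\033]133;D;0\033\\'    # D;<status>: command finished, exit 0
```

Shells typically emit A/B from the prompt string and C/D from pre/post-exec hooks, so users get the features without typing anything differently.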
Some lesson must surely be drawn from this about incremental adoption.
Missing out on inline images and megabytes of true-color CSI codes is a feature, not a bug, when bandwidth is limited.
If you want jupyter, we have jupyter. If you want HTML, we have several browsers. If you want something else, make it, but please don’t use vt220 codes and call it a terminal.
The article is just wish-listing more NIH barbarism to break things with. RedHat would hire this guy in a heartbeat.
It's especially important for retro and embedded computing. Legacy systems as well (banks, telecoms, etc.)
That's why most telecom hardware still runs a telnet client haha (RRUS and BBUS) (over IPsec) hehe
one of the strange things to me about the terminal landscape is how little knowledge sharing there is compared to other domains i'm familiar with. iTerm has a bunch of things no one else has; kitty influenced wezterm but otherwise no one else seems to have valued reflection; there's a whole bunch of extensions to ANSI escapes but most of them are non-standard and mutually incompatible. it's weird. if i compare to something like build systems, there's a lot more cross-pollination of ideas there.
This is why I wrote this:
https://www.theregister.com/2025/06/24/tiling_multiplexers_s...
Trying to bring a bunch of related tools together in one place and compare and contrast them.
https://github.com/Julien-cpsn/desktop-tui
It is incomplete but takes what is almost a side aspect of TWIN and runs with it.
https://github.com/cosmos72/twin
TWIN is nearly 20 now and does quite a lot. It even has a Wikipedia page.
https://en.wikipedia.org/wiki/Twin_(windowing_system)
It runs on lots more OSes than just Linux.
When using tools that can emit 0 to millions of lines of output, performance seems like table-stakes for a professional tool.
I'm happy to see people experiment with the form, but to be fit for purpose I suspect the features a shell or terminal can support should work backwards from benchmarks and human testing to understand how much headroom they have on the kind of hardware they'd like to support and which features fit inside it.
Rid us of the text-only terminal baggage that we deal with today. Even graphics are encoded as text, sent to the terminal, then decoded and dealt with.
Plan9 had the terminal right. It wasn't really a terminal, it was just a window which had a text prompt by default. It could run (and display!) graphical applications just as easily as textual applications.
If you want a terminal of the future, stop embracing terminals of the past.
I don't know, but I assume so. Also I can't think of a use case for this but that's just my lack of imagination, I suspect.
It ticks some of the boxes, but tonnes of work would be needed to turn it into a full alternative.
The web solves problems that are almost impossible to properly solve with a terminal, particularly with rendering of more complicated languages and display and interaction with sophisticated visualisations.
Pushing the terminal further while maintaining compatibility, performance and avoiding a terminal war with incompatible protocols is going to be a struggle.
Unless someone creates a cross-platform, open source, modern and standards compliant terminal engine [1].
Don't get me wrong, I'd be quite interested in a vintage computing discussion on the evolution of VT-100/220 etc terminal protocols. There were some interesting things done into the 90s. That's actually what I clicked in expecting. Of course, those were all supplanted by either XWindows (which I never got to use much) or eventually HTML/CSS. And if we're talking more broadly about structured page description languages, there's no shortage of alternatives from NAPLPS to Display Postscript.
The terminal never left.
Terminal emulators display grids of characters using all sorts of horrifying protocols.
Web browsers display html generated by other programs.
Any solution has to address this use case first, IMO. There are some design constraints here, like:
- I don't care about video game levels of graphics - I generally want things to feel local, as opposed to say some cloud GUI - byte stream model: probably bad? But how would I do better?
as just a few examples I thought of in 10 seconds; there's probably way more.
I've thought about the author's exact complaints for months, as an avid tmux/neovim user, but the ability to interact with system primitives on a machine that I own and understand is important.
But hey, those statements are design constraints too - modern machines are tied somewhat to unix, but not really. Sysadmin stuff? Got standardized into things like systemd, so maybe it's a bit easier.
So it's not just a cynical mess of "everything is shit, so let's stick to terminals!" Rather, I'd like to see more actual consideration of the underlying systems you are operating on, fundamentally, instead of immediately jumping to "how do we design the best terminal" (effectively UI). The actual workflow of being a systems plumber happens to be aided very well by tmux and vim :)
(And to be fair, I only make this critique because I had this vague feeling for a while about this design space, but couldn't formalize it until I read this article).
https://commons.wikimedia.org/wiki/File:DEC_VT100_terminal.j...
I may disappoint you with the fact that IBM PC-compatible computers have replaced devices of that class. We can only observe certain terminal emulators in some operating systems. There have been many attempts to expand the functionality of these emulators. However, most features beyond the capabilities of VT100 have not caught on (except UTF-8 support). I do not believe that anything will change in the foreseeable future.
You are better off maintaining what already works. Either way, why would you want to migrate when things are working fine as is?