How does that work exactly on a lower level, say the current? ASCII text would be decoded to the binary and 1s would be high voltage and 0s would be low? And if there's no data transmitted it would be all low voltage?
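Roughly, yes, but inverted from what you'd guess: an async serial line idles high ("mark"), the start bit pulls it low, the data bits follow least-significant-bit first, and the stop bit returns the line high. (Over RS-232 the physical voltages are inverted relative to TTL, but the logical framing is the same.) A minimal sketch of how one byte becomes line levels — `frame_byte` is a hypothetical helper, not any real API:

```python
def frame_byte(byte, data_bits=8):
    """Frame one byte for an async serial line (8N1-style).

    Returns logic levels: 1 = mark/high (the idle state), 0 = space/low.
    """
    bits = [0]  # start bit: pulls the idle-high line low
    bits += [(byte >> i) & 1 for i in range(data_bits)]  # data, LSB first
    bits.append(1)  # stop bit: line returns to idle (mark)
    return bits

# 'A' = 0x41 = 0b01000001, sent LSB first between start and stop bits
print(frame_byte(ord("A")))  # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
```

Note that "no data" is all *high*, not all low — which is exactly the current-loop convention described further down this thread.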
Oddly enough I could have answered the OP's question in an interview, 40 years ago, but stuff has gotten so complex that I can't even tell you what all of the layers of abstraction are.
https://en.wikipedia.org/wiki/8250_UART
Though the UART chip I used was years earlier than that. Perhaps the Wikipedia article is wrong on that point.
The only electrical components in a Teletype are (a) a continuously-running electric motor, and (b) an electromagnet and a few switches. Everything else is completely mechanical.
The Teletypes in a circuit, and the electromagnet and switches within them, are all connected in series, forming a current loop. (Current, rather than voltage, is used because you can use a constant-current power supply to get the same power at each electromagnet regardless of how many are in the circuit and how many miles of wire are between them.)
When the circuit is idle, current is flowing. This allows any Teletype to begin transmitting; if it were the other way around, each one would need its own individual line power supply. (Incidentally, this is the origin of the “break” key still found on many modern keyboards: when the circuit was disconnected, nobody could transmit, so “breaking in” to the circuit was how you interrupted somebody, and a lot of early computers used this as a primitive version of ^C.)
When you press a key, a rod is lifted, activating a clutch that connects the motor to the rest of the machine, which starts running. The first thing this does is break the circuit, deactivating the electromagnets in the receiving mechanisms of all the other Teletypes. When this electromagnet deactivates, it trips the clutch in those machines, starting them running so they can receive the character. (This is why it’s called the “start bit.”)
One-tenth of the way through the rotation, a cam in the sending machine switches from the “start bit” to the switch for the first data bit. If the rod on the key you pressed has a bump on it in the right spot, it will press this switch and send a “mark”; if it’s missing the bump you’ll get a “space” instead. On the receiving end, a clever mechanism connects the electromagnet to a lever: if the magnet engages, it pushes the lever one way; and if it doesn’t, the lever stays pushed the other way.
At two-tenths of the rotation, the first data bit switch is disconnected and the second data bit switch engages. This of course reads the second bump on the rod, and at the receiving end the clever mechanism has moved so the magnet pushes (or doesn’t) the second lever.
(Notable exceptions: the ‘ctrl’ key forces the two high-order bits low—this is why e.g. Ctrl+D and “End of Transmission” (0x04) are the same—and ‘shift’ modifies a high-order bit.)
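That ‘ctrl’ trick survives unchanged in 7-bit ASCII: masking a letter with 0x1F (keeping only the low five bits) yields the corresponding control code. A quick check, assuming the standard ASCII layout — `ctrl` here is just an illustrative helper name:

```python
# Ctrl clears the high-order bits of a 7-bit ASCII code,
# keeping only the low five (mask 0x1F).
def ctrl(ch):
    return ord(ch) & 0x1F

print(hex(ctrl("D")))  # 0x4  == EOT, "End of Transmission"
print(hex(ctrl("[")))  # 0x1b == ESC, which is why Ctrl+[ acts as Escape
```

The mask also explains why Ctrl+d and Ctrl+D do the same thing: case lives in one of the bits that Ctrl throws away.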
At three-tenths of the rotation, of course, the third bit is read, transmitted and received; this process continues for the remaining data bits.
The final two-tenths of the rotation are known as the “stop bits”, and consist of some signal that doesn’t matter much. The receiving end uses this time to trip the print mechanism: the levers that were pushed-or-not during the data bits engage with some bumps in some other rods, blocking all but one of them that matches the specific pattern, and then a thingie shoves forward, slamming the appropriate type bar up into the ribbon and paper, and as it returns the carriage advances one character. (The reason for using two bits is simply to give some time for all of this to happen.)
Finally, the mechanisms on the sending and receiving ends complete their full rotation, disengaging the clutch and coming to an abrupt halt; the transmitting end reconnects the line so the circuit is available, and everything is ready for the next character. All of this has happened in less than a tenth of a second, a mechanical ballet choreographed too fast for the human eye to see, and transmitting information potentially hundreds of miles at the speed of light.
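The receiving mechanism described above amounts to sampling the line once per bit time and reassembling the data bits in order. A software sketch of the same decode, mirroring the frame layout in this comment (one start bit, seven data bits LSB first, stop bits) — `decode_frame` is a hypothetical illustration, not Teletype firmware:

```python
def decode_frame(levels, data_bits=7):
    """Decode one character from sampled line levels.

    levels: one sample per bit time; 1 = mark (current flowing), 0 = space.
    Mirrors the mechanical receiver: the start bit trips the clutch,
    each data-bit sample pushes (or doesn't push) a lever, and the
    stop bits give the print mechanism time to fire.
    """
    if levels[0] != 0:
        raise ValueError("no start bit: line is still idle")
    value = 0
    for i in range(data_bits):
        value |= levels[1 + i] << i  # data bits arrive LSB first
    return chr(value)

# 'E' = 0x45 = 0b1000101: start bit, then 1,0,1,0,0,0,1 (LSB first), then stops
print(decode_frame([0, 1, 0, 1, 0, 0, 0, 1, 1, 1]))  # E
```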
Basically, it's a capability list though, IIUC.
[12](https://www.warp.dev/blog/what-happens-when-you-open-a-termi...)
ls is not recognized as an internal or external command,
operable program or batch file.
;-)

a) Escape sequence to set the "dynamic" terminal title
b) how e.g. GNOME notifies you that a long running command has completed
c) how e.g. GNOME asks if you are certain to close a terminal, but only in case you're not in a shell
d) maybe readline or /etc/inputrc
e) bash completions maybe? it's sort of in there already
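For (a), the title is set with an OSC (Operating System Command) escape sequence — ESC ] 0 ; title BEL sets both the icon name and window title. Items (b) and (c) typically ride on terminal-specific shell-integration sequences, so they vary by emulator. A minimal sketch of (a); the title string and helper name are just examples:

```python
import sys

def set_terminal_title(title):
    """Emit OSC 0 (set icon name and window title), terminated by BEL."""
    seq = f"\033]0;{title}\a"
    sys.stdout.write(seq)
    sys.stdout.flush()
    return seq  # returned so the sequence can be inspected

set_terminal_title("build: running tests")
```

Shells usually do this from the prompt (e.g. via PS1), which is how the title tracks your current directory or running command.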
A terminal should only do anything when the user types something and presses enter, and then it should only do what the user told it to do. The idea that it goes off to the network unasked is beyond invasive.
But good move. The idea of a terminal tracking my keystrokes and commands is pretty scary stuff, especially if I'm going to use this thing for security-sensitive sessions.
The terms are really weird for something I thought was a terminal app, and their common questions are talking about "cloud-oriented features", which I really don't understand and probably don't want.
I'd be happy to pay (even per month) for a version of this that asks you "do you want to use our cloud features or just the local version", has different pricing, and definitely no login.
Note: I didn't end up actually trying this as I really didn't like the sound of not knowing what my terminal is sending out. They do list what they send for telemetry, but not sure what is considered "cloud-features"
To clarify, all cloud-oriented features are fully opt-in. For example, you have to explicitly share a block for us to store it in the cloud.
You can also opt-out of telemetry (which we use to determine feature usage and plan our roadmap FWIW). We even have a network log so you can see every network call we make. More details here: https://www.warp.dev/blog/telemetry-now-optional-in-warp
Due to the corporate software on my work Mac, auto-updates fail. Ordinarily this isn’t a huge deal as I’ll just download the DMG and replace the app manually. But Warp drops down an obtrusive overlay that you can’t dismiss until you update.
I tried to like it, but little forced choices like that sent me back to iTerm 2.
I recently stumbled upon an article on the same topic that contains competent and accurate information; I have the link handy because I recommended it to a friend: https://thevaluable.dev/guide-terminal-shell-console/
I enjoyed the blogpost, and I trust the authors of a terminal emulator to give a good overview of the topic.
However, something that I have not seen yet and would love to check out is an attempt to redesign the terminal from scratch, separating the historical baggage from the parts that still make sense today.
How would a designer create this tool without the historical precedent? Is tradition holding us back? What new standards could we reach?
While we haven't rebuilt from absolute ground zero, Warp is definitely trying to extend the capabilities of a terminal (emulator) from what's historically been possible. For example, we introduced a dedicated input editor so you can have an IDE-like experience in the terminal. It's fundamentally different from how input is entered in a traditional terminal. But with this innovation, we've had to be careful to ensure that all the input features you expect in a normal terminal (even obscure ones like `alt-.`) work how you'd expect, _and then some_.
Overall though, starting from scratch is hard because we need to stay backwards-compatible with all the CLIs we use everyday.
The UNIX IO system was adapted from Multics.
Why would you even read it after reading the title?