Citation: https://archive.org/details/1991-proceedings-tech-conference... but note that the explanation of stream pools there is a little less precise and more general than really necessary. I believe that later versions of sfio simplified things somewhat, though I could be wrong. (I find their code fairly hard to read.)
Anyhow, it seems to me a missed opportunity when new languages that don't actually use libc's routines for anything go on to reinvent POSIX's clunkier aspects.
> The FIXME comment shows the Rust team acknowledges that ideally they should check if something is executed in TTYs or not and use LineWriter or BufWriter accordingly, but I guess this was not on their priority list.
This does not inspire confidence.
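For what it's worth, the check the FIXME alludes to is straightforward on modern Rust. Here's a hedged sketch (not the stdlib's actual implementation, and `make_stdout_writer` is a name I made up) of choosing `LineWriter` vs. `BufWriter` based on whether stdout is a TTY, using the stable `IsTerminal` trait:

```rust
use std::io::{self, BufWriter, IsTerminal, LineWriter, Write};

// Hypothetical helper: pick a buffering strategy depending on whether
// stdout is attached to a terminal (interactive) or a pipe/file.
fn make_stdout_writer() -> Box<dyn Write> {
    let stdout = io::stdout();
    if stdout.is_terminal() {
        // Interactive: flush on every newline so output appears promptly.
        Box::new(LineWriter::new(stdout))
    } else {
        // Redirected/piped: full block buffering for throughput.
        Box::new(BufWriter::new(stdout))
    }
}

fn main() -> io::Result<()> {
    let mut out = make_stdout_writer();
    writeln!(out, "hello")?;
    out.flush()
}
```

The stdlib can't trivially retrofit this because `Stdout`'s buffering behavior is observable and changing it could break programs that depend on line-buffered output to pipes.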
/// The returned handle has no external synchronization or buffering layered on top.
const fn stdout_raw() -> StdoutRaw;

fwrite only buffers because write is slow.
make it so write isn't slow and you don't need userspace buffering!
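To make the cost concrete, here's a small sketch (my own illustration, not from the thread) where a counting sink stands in for a file descriptor whose every `write` is a syscall. Wrapping it in `BufWriter` coalesces a thousand one-byte writes into a single call to the sink:

```rust
use std::io::{self, BufWriter, Write};

// A writer that counts how many underlying write() calls it receives,
// standing in for an fd where each write is a syscall.
struct CountingSink {
    calls: usize,
}

impl Write for CountingSink {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        self.calls += 1;
        Ok(buf.len())
    }
    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    // Unbuffered: one "syscall" per write call.
    let mut raw = CountingSink { calls: 0 };
    for _ in 0..1000 {
        raw.write_all(b"x")?;
    }
    assert_eq!(raw.calls, 1000);

    // Buffered: 1000 bytes fit in BufWriter's default 8 KiB buffer,
    // so the sink sees exactly one write when into_inner() flushes.
    let mut buffered = BufWriter::new(CountingSink { calls: 0 });
    for _ in 0..1000 {
        buffered.write_all(b"x")?;
    }
    let inner = buffered.into_inner().map_err(|e| e.into_error())?;
    assert_eq!(inner.calls, 1);
    Ok(())
}
```

If `write` itself were cheap, the userspace coalescing would buy much less, which is the point being made above.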
Buffering should basically always be work- or time-based: flush either when you've buffered enough or when enough time has passed. You buffer because per-element latency starts bottlenecking your throughput; if you have so little data that throughput isn't being limited, then you should be flushing.
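A minimal sketch of that "work or time" policy (my own illustration; `DeadlineWriter` is a hypothetical name): flush when either `max_bytes` have accumulated or `max_age` has elapsed since the first unflushed byte. Note that a real implementation also needs a timer or event loop to flush an idle buffer; this one only checks the deadline when a write arrives:

```rust
use std::io::{self, Write};
use std::time::{Duration, Instant};

// Hypothetical "work or time" buffered writer.
struct DeadlineWriter<W: Write> {
    inner: W,
    buf: Vec<u8>,
    max_bytes: usize,         // "work" threshold
    max_age: Duration,        // "time" threshold
    first_write: Option<Instant>, // age of the oldest unflushed byte
}

impl<W: Write> DeadlineWriter<W> {
    fn new(inner: W, max_bytes: usize, max_age: Duration) -> Self {
        DeadlineWriter { inner, buf: Vec::new(), max_bytes, max_age, first_write: None }
    }

    fn should_flush(&self) -> bool {
        self.buf.len() >= self.max_bytes
            || self.first_write.map_or(false, |t| t.elapsed() >= self.max_age)
    }
}

impl<W: Write> Write for DeadlineWriter<W> {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        if self.first_write.is_none() {
            self.first_write = Some(Instant::now());
        }
        self.buf.extend_from_slice(data);
        if self.should_flush() {
            self.flush()?;
        }
        Ok(data.len())
    }

    fn flush(&mut self) -> io::Result<()> {
        self.inner.write_all(&self.buf)?;
        self.buf.clear();
        self.first_write = None;
        self.inner.flush()
    }
}
```

Contrast with libc's fixed policies (unbuffered, line-buffered, fully buffered), none of which incorporate time at all.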
I think the core question is whether some middle layer of output processing between program and sink/display could be created that knows enough about the sink (taking terminals as an example: raw mode, console dimensions, buffering) to make most programs display correctly enough for most users, without knowing the internals of the program writing the output. If that can be done, then programs that need more specifics (e.g. complex animated/ncurses GUIs) could either propose overrides/settings to the output middleware or configure it directly, and programs that don't wouldn't have to.
That's possible to implement, sure, but can that be done without just reinventing the POSIX terminal API, or any one of the bad multiplatform-simple-GUI APIs, badly?