This is just one example of where the old ways aren't necessarily the best. I'm fairly confident we could come up with something better than POSIX if we were willing to make the effort.
Please, no. An OOP approach means that my process needs to know how to communicate with other processes via very specific protocols, which aren't well defined (any process could define its own object types). This is the exact opposite of flexibility. The usefulness of tools like grep or sed would drop drastically, and we would fall back to big blobs of software.
Since I'm just passing data, do I really need the behavior attached to it? How would state persistence be handled?
The text (bytes as ASCII until line ending) approach may seem ugly and dirty, but in fact you can pass lists, tuples, maps, trees or just text, and it is up to the receiver to make sense of the data.
Need data from A but B can't understand it? Use C (which operates on text) to format A's output as B needs. With objects, how many "translators" (C in the example) would you need to achieve the same result?
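As a sketch of the translator idea (tool names and formats here are hypothetical): suppose A emits colon-separated records and B expects tab-separated ones. A tiny filter C, operating only on text lines, bridges the two:

```python
# Hypothetical translator "C": reads A's colon-separated lines and
# emits the tab-separated form that B expects. It only manipulates text,
# so it needs no knowledge of either program's internals.
def translate(lines):
    out = []
    for line in lines:
        fields = line.rstrip("\n").split(":")
        out.append("\t".join(fields))
    return out

a_output = ["alice:42:admin", "bob:7:user"]  # what A might print
print("\n".join(translate(a_output)))        # what B would read on stdin
```

In a real pipeline this is just `A | C | B`, with C often being nothing more than a one-line sed or awk invocation.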
It's not as hard as you make out. Take a look at how PowerShell works for an idea of how an OOP-based approach can work for CLIs.
http://www.computerworld.com/article/2954261/data-center/und...
That's why we have IDL -- interface description language for RPC calls. An IDL-to-X (usually C) compiler generates the necessary glue so that anybody can talk to the program in question. IDL is also the basis of MS COM, which I quite like from the design standpoint.
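A toy illustration of the idea (this is not any real IDL syntax, just a sketch): the interface description gets "compiled" into marshalling glue, so both sides agree on the wire layout without knowing each other's internals.

```python
import struct

# Toy "IDL": field name -> struct format code. Purely illustrative --
# real IDLs (CORBA IDL, MS MIDL, protobuf .proto) are far richer.
IDL = [("id", "I"), ("score", "d")]

def compile_idl(idl):
    """The 'IDL compiler': generate encode/decode glue from the description."""
    fmt = "<" + "".join(code for _, code in idl)
    names = [name for name, _ in idl]

    def encode(record):
        return struct.pack(fmt, *(record[n] for n in names))

    def decode(buf):
        return dict(zip(names, struct.unpack(fmt, buf)))

    return encode, decode

encode, decode = compile_idl(IDL)
wire = encode({"id": 7, "score": 3.5})
# Any process holding the same IDL can decode the bytes identically.
print(decode(wire))
```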
> The usefulness of tools like grep or sed would drop drastically, and we would fall back to big blobs of software.
So you teach grep to take an IDL file, invoke an IDL compiler and dynamically load the parser for the protocol in question. Also, if the broker were a standardized, perhaps in-kernel component (dbus, kdbus), you could attach "idlgrep" to any process to trace its calls. You wouldn't be restricted to pipes.
> Since I'm just passing data, do I really need the behavior attached to it? How would state persistence be handled?
The parent's wording was a bit unfortunate. You can have interfaces, interface inheritance, and versioning (the "OOP" part), but no behavior is sent between processes.
> With objects, how many "translators" (C in the example) would you need to achieve the same result?
Exactly one: the IDL compiler.
So you introduce a huge load of accidental complexity because the text interface has some perceived inefficiency? I'll choose simplicity and accessibility over this mess anytime.
>> and it is up to the receiver to make sense of the data
That is exactly the same thing.
Either way you're throwing bytes from one process to another and hoping the second one can do something useful with it.
In a world where things are "fixed" by just blowing away a VM and spinning up a new one, you might not care but in that case why bother to log anything at all...?
This comes down to a failure of the binary design. It's possible to design a binary format that is robust against truncation of a few bytes. One example: give the file a header with pointers to the start and end of each data block, and keep multiple copies of that header. Another alternative is to require each block to record where its own data ends. That way, even with partial corruption, you can still read the uncorrupted data blocks.
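A minimal sketch of the second alternative (the 4-byte length prefix and the sample data are invented for illustration): because every block records its own extent, truncating the file only loses the final block, and everything before it remains readable.

```python
import struct

def write_blocks(blocks):
    """Each block: 4-byte little-endian length prefix, then the payload."""
    return b"".join(struct.pack("<I", len(b)) + b for b in blocks)

def read_blocks(buf):
    """Recover every complete block; stop cleanly at a truncated tail."""
    out, i = [], 0
    while i + 4 <= len(buf):
        (n,) = struct.unpack_from("<I", buf, i)
        if i + 4 + n > len(buf):
            break  # final block is truncated -- earlier blocks still readable
        out.append(buf[i + 4 : i + 4 + n])
        i += 4 + n
    return out

data = write_blocks([b"log entry 1", b"log entry 2", b"log entry 3"])
damaged = data[:-5]          # simulate losing the last few bytes
print(read_blocks(damaged))  # the first two entries survive intact
```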
POSIX is not Unix.
Passing data around as text is a tenet of the Unix philosophy, while AFAIK POSIX doesn't mandate any of that.
http://pubs.opengroup.org/onlinepubs/9699919799/utilities/co...
"This is just one example of where the old ways aren't necessarily the best."
The lowest-hanging fruit gets picked. "Let's dump absolutely everything and NIH the whole thing for a one-time 5% performance increase" doesn't sell well when that one-time gain is dwarfed by the time it takes hardware or network capacity to improve 5%, or by fixing poorly scaling algorithms. Also insert the usual analogy about the cost of microscopically faster hardware vs. the labor cost of extremely expensive rockstar-ninja programmers.
It's also assumed that change will lead to improvement, because of anecdote, or because change is always good. However, "the thing that won uses text, so naturally we have to get rid of text" doesn't sound like a wise plan.
I'd argue the Unix command line ecosystem has only 'won' in the sense that many developers are familiar with it. I don't think it is technically the best we could do if we were starting from a blank slate.
OO on its own is slower to parse than plain text, but binary is faster to parse than plain text. If you combine the two, you can get the best of both worlds: something that's both reliable and fast.
As one silly example, consider Google's protocol buffers. They are not particularly OO-y. (I have great sympathy for functional languages, but I don't think they offer much insight into how to format your data files. And neither does OO.)
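For a flavor of what protocol buffers do at the byte level, here's a sketch of their base-128 varint integer encoding (the real wire format has more to it — field tags, wire types, length-delimited fields — this shows only the varint part):

```python
def encode_varint(n):
    """Base-128 varint: 7 payload bits per byte, high bit = 'more follows'."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)  # more bytes coming
        else:
            out.append(b)         # last byte: high bit clear
            return bytes(out)

def decode_varint(buf):
    """Return (value, bytes_consumed)."""
    value = shift = i = 0
    while True:
        b = buf[i]
        value |= (b & 0x7F) << shift
        i += 1
        if not b & 0x80:
            return value, i
        shift += 7

print(encode_varint(300).hex())  # 300 -> ac02, the classic protobuf example
```

Small numbers take one byte and larger ones grow as needed, which is part of why the format stays compact while remaining fast to parse.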