Best of luck to them. Another interesting shell to check out is elvish, lots of new ideas there (even if awkward to use).
(Disclosure: I’m one of the core fish devs/maintainers. Edit: The entire team is awesome and the others deserve virtually all the credit!)
However I don't see it being used AT ALL for the cloud/linux use case? Those are the cases where you tend to get 1000+ lines of shell scripts.
For example, I mention Kubernetes/Docker/Chef, and I've never seen fish in that space.
I also don't know of any Linux distro that uses fish as their foundation -- they all appear to use a POSIX shell like bash/dash/busybox ash. fish is "on top".
See Success with Aboriginal, Alpine, and Debian http://www.oilshell.org/blog/2018/01/15.html -- these distros are built with thousands of lines of shell scripts.
Either 1) I don't know about such usage, 2) people don't know that fish can be used this way, or 3) there is some problem with using fish this way.
I link to this post in the FAQ, which I think is a lot closer to Oil:
https://ilya-sher.org/2017/07/07/why-next-generation-shell/
It's basically the "devops" use case. (And as I mention the main difference between Oil and NGS is that Oil is compatible / has an upgrade path from bash.)
If you want to switch your existing distro scripts to fish, you need to rewrite a lot of stuff. If you want to start a new distro, "it's written in fish!" isn't a terribly compelling selling point, since I don't think anyone is actively picking distros based on their tooling language.
For the general devops case, there's a lot of existing example code out there for bash scripts, and far less for fish. "How do I do X in bash?" is probably going to get you a decent example on Stack Overflow. Devops (I feel) cares a bit more about the installed-everywhere thing, and fish isn't a standard part of most distros.
Oil having a goal of being completely sh/bash compatible gives it much greater odds of something like Debian switching to it, since it wouldn't carry huge technical debt along with it.
Now, I quite seriously believe that a 1000-line shell script only exists out of error. I still occasionally end up writing dense 200-300-line shell scripts, but not without feeling very dirty along the way. Either split into small, simple shell scripts (which is fine), or use a different language.
In the cross platform build pipeline at work, I keep a strong discipline when it comes to scripts: They must be short (<=100 lines), and if their complexity exceeds a certain threshold (parsing more than a "grep | cut" here and there, total program states exceeding some low number), then a shell script is no longer acceptable regardless of length. And, well, it's not safe to assume the presence of anything more than a shell.
If you are writing and dealing with 1000+ lines of shell scripts, then experience tells me that you are shooting yourself in the foot. With a gatling gun.
(I used fish, btw. The interactive experience was nice, but the syntax just felt different without much gain, which was frustrating to someone who often writes inline oneliners. Unlearning bash-isms is not a liberty I can afford, as I need to be proficient when I SSH into a machine I do not own or control. I can't force the entire company to install fish on all our lab servers, nor is it okay to install a shell on another person's dev machine just because we need to cooperate.)
> 1. People who use shell to type a few commands here and there.
> 2. People who write scripts, which may get into the hundreds or even thousands of lines.
> Oil is aimed at group 2. If you're in group 1, there's admittedly no reason to use it right now.
From my perspective, fish is basically the opposite of oil in that it mainly targets group 1 (admittedly I'm biased in that I rarely write shell scripts at all). IME fish scripts as a direct replacement for bash scripts are pretty rare, but that doesn't mean it's not a success if it's aimed at interactive use.
Second, I was recently chided by another fish user for writing fish scripts: "you're using it wrong" was the sentiment. The argument was it's a shell intended to be interactive, but extendable. I don't know how pervasive this line of thinking is, but who's going to learn all about their shell if they're not supposed to write scripts in its language?
Do you really need 1000+ lines of shell scripts? I have almost 20 years of shell scripting experience and I still have the rule of not writing shell scripts longer than 1-2 pages; for longer projects I use something else, or break it down into much smaller chunks and invoke the parts from a main.sh.
Fish is for making software engineering and system administration easier, with functionality like command completion (without tab), and it works like a charm. But I would never bother to rewrite my bash scripts in fish, simply because bash is on all the servers and fish is on none by default.
I am not sure what you are trying to solve here but good luck.
The two shells target different use cases. Hopefully they are both successful in attracting a good audience.
But is this project really overlapping it? I see mostly fish as "UX" centered, while oilshell - as far as it's stated here - reminds me more of powershell (I found the idea of having a modern language behind it so cool when it was released, too bad it wasn't my ecosystem): looks like oilshell is targetting scripting more than UX.
Anyway, I love the idea of having several shells who try to have a new look on what a shell is.
EDIT: oh, btw, let's not consider no significant scripting can be done with bash, just look at the incredible work from dokku team ;) https://github.com/dokku/dokku
As for the scripting language part, however, I have always wondered why people use headache languages like bash/sh when we have Python. Might anybody have a clue? I'd appreciate it if you could share.
That is, Fish intends to be a useful interactive shell, and if it is also scriptable, that's because you need scripting for it to be useful. Fish doesn't make a serious attempt at being the language in which system scripts are programmed.
Oil on the other hand is a concerted effort to formalise and rigorously implement the language which existing system scripts are written in. The author believes that is a starting point for a good interactive shell -- but programming comes first.
My goals are akin to Fish's in terms of REPL use, but with a greater emphasis on scripting.
Like Fish, murex also does man page parsing (in fact I wrote mine before realising Fish did the same), but unlike fish, autocompletions can be defined in a flat JSON file (much like Terraform) as well as dynamically with code.
Currently I'm working on murex's event system, so you can have the shell trigger code upon events like file system changes.
My ultimate aim is to make murex a go-to systems administration tool -- if just for myself -- but the project is still young.
I'll have to give yours a spin too. I'm glad there's a lot of shell innovation right now. I'm all for breaking posix shell standards and creating things that are way more usable. Fish's prompt customization, functions, highlighting, completion and searching are pretty amazing.
I realize a lot of these little projects will come and go. No matter what, they're great learning tools for the creators/developers, exploring what it takes to make an interactive shell.
Still, I hope we see more stuff like fish come out (and make no mistake, fish took a lot of years and a lot of devs. In the early days my instance would crash every once in a while in ways I couldn't easily reproduce). It's great that we're finally getting away from the traditional bash/zsh/ksh stuff and into newer shells that make coding and navigation easier.
Elvish is pretty nifty, but the biggest failing point to me is that the fancy rich pipelines really work just for in-process stuff, which for me kinda loses the point of being a shell. Of course I do realize that rich (polyglot) interprocess pipelines is a difficult problem; some might say a pipedream.
... | to-json | your-command | from-json | ...
It is also trivial to wrap it into a function:

fn f { to-json | your-command | from-json }

It's a challenge for me to use well; not sure all that richness is composable. Better programmers than I am would know.
The feature I use the most is automatic history search by typing part of a command and hitting UP to search the history. I also find the scripting language more straight-forward.
Doesn't bash have that same feature? Or is there some subtle difference between what you're describing and what bash does?
This is available in bash/sh by setting this in your ~/.inputrc file:
"\e[A": history-search-backward
"\e[B": history-search-forward
"\e[C": forward-char
"\e[D": backward-charWhat's the process for requesting functions be added to core? I had to write my own to get bash's 'dirs -v' functionality. My solution depends on sed and is no doubt a hack.
The only time I run into issues is when a command expects to manipulate environment variables via bash syntax.
I think the fish documentation WRT scripting could be much better, but the language is more elegant than bash or PowerShell IMHO.
I did use fish a bit as a script language, but I decided for anything of any size I much prefer Julia. For typical file system navigation, fish is better, but Julia is actually pretty decent as a shell, despite being a real language. So writing shell scripts in it is pretty nice.
In the beginning I wrote separate programs executed from fish shell. But now I just fire up Julia as a shell and run functions directly there interactively.
It sort of is an "uncanny valley" for a text interface. It feels close enough to a traditional UNIX shell that I start to interact with it like one... but has enough differences that I found myself constantly tripping over them.
And in my experience 90% of those are in the form `FOO=bar command` which can be replaced with `env FOO=bar command` and works just fine in fish.
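To make the equivalence concrete (the variable name and command here are placeholders), both forms run a command with FOO set for just that one invocation, and the `env` form is also valid fish syntax:

```shell
# POSIX/bash prefix-assignment form: FOO is set only for this command.
FOO=bar sh -c 'echo "$FOO"'

# Portable form using env(1): works unchanged in fish.
env FOO=bar sh -c 'echo "$FOO"'
```

Both print `bar`.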
eval $(ssh-agent)
Sure you can get addons (like bass[0]) that will translate the sh environment variable settings to fish, but it’s a pain to have to do that (and remember wth it was called).

I use Fish as my shell, but I do scripting with Bash. There’s nothing that prevents you from doing so unless you’re sourcing a file.
Not that the POSIX-like shell syntax is not all sorts of clunky and odd but I almost consider it a feature, it's a deterrent to force you to move to a "real" scripting language when the concepts become too complex to express in a shell script.
Excellently put. POSIX shell languages have fantastic capabilities you just can't get in most other languages. I would love to see a more safe, more sane shell language gain enough popularity to change the narrative that "shell scripts are dangerous and impossible to maintain."
The contrasts to Python and Ruby made me think of xonsh[1], a Python-based shell that can dynamically switch between Bash-like syntax and standard Python. It's not quite ready to become my daily driver, but I'm still excited about it.
[1]: https://xon.sh
https://github.com/oilshell/oil/wiki/ExternalResources
I guess what you mean by embedded is that it should be an embedded DSL in a full-fledged programming language? I don't quite agree, since there are at least 20 projects like that on the wiki page, none of which is popular.
Probably the most popular one is eshell, in Emacs Lisp?
But if there's something I don't know about I'd be interested in hearing it. This idea goes back at least 20 years, e.g. to scsh. And it hasn't taken off.
But certainly I don't begrudge anyone if their favorite environment is Racket and they want to stay in Racket. That's a perfectly reasonable thing. It's just not what I would expect anyone besides racket users to use.
One reason I'm interested in shell is that it's the lowest common denominator between different groups of programmers. C/C++ programmers use it heavily, as do Python, Ruby, JS, and Go programmers. Everybody uses it.
And of course I agree with that! That's the whole reason for Oil.
> concurrent processes
Job control is disabled in shell scripts, and your only other option is juggling process ids. Combined with nearly nonexistent exception handling, using anything more than a single process at a time is like pulling teeth.
Things like ssh have super awkward workarounds using control pipes.
I'm not sure I've ever seen a shell script use concurrent processes in the wild.
Python's subprocess library is excellent and makes concurrent processes a breeze.
> file system
With some minor exceptions, I can't think of any FS ops I'd do in shell that isn't just a couple letters longer in Python and 10 times more flexible.
The only reason to use shell is that it has a simple syntax for pipelines.
I rewrite any script > 2 lines in Python and have no regrets.
For all sorts of interactive stuff I use fish, because it works the way you want for the most common tasks. I can use it to really quickly match and get back previous statements or do a completion.
Also much easier to configure and grok than bash, because it has saner syntax and is a simpler shell language.
However when writing shell scripts I use a real language like Julia. It integrates very well with the shell world so I don't find it problematic to do this. It is very easy to read output from processes and pipe stuff. Much nicer than say Python or Ruby.
You got built in syntax to deal with shell stuff, but then didn't make it crazy so you end up with a mess like perl. Julia is actually a very clean and nice language. Which also happens to blistering fast and have LISP style macros.
In an effort to understand the reasons for actively choosing against 3, does anyone know what problems those would be?
tl;dr I used Python for prototyping; it will be removed.
Consider it an implementation detail -- building and running Oil does not require Python, as a portion of the interpreter is bundled in the tarball. Python 2 vs. 3 doesn't really matter. It was in Python 3 at one point.
More discussions here:
https://www.reddit.com/r/ProgrammingLanguages/comments/7tu30...
> I encountered a nice blog post, Replacing Shell Scripts with Python, which, in my opinion, inadvertently proves the opposite point. The Python version is more difficult to write and maintain.
Here's the link: https://medium.com/capital-one-developers/bashing-the-bash-r...
I think, roughly speaking, the fact that Python 3 is much closer to a sane language for engineering means that it's less suited for scripting.
What is A=$(cmd) in bash is an absolute pain in python.
import shlex
import subprocess

def _call(cmd_str):
    return subprocess.check_output(shlex.split(cmd_str)).decode("utf-8")

Definitely a verbose monstrosity compared to doing it in bash, but more than worth avoiding the garbage fire that is Bash. And it only works for simple cases.
They work well, if a bit magically, until you need to background a process.
Xonsh is an awesome fishy shell that's a Python superset. $() is built right in.
_("ls -la") | _(lambda x: x.split()[-1]) | _(lambda x: os.path.splitext(x)[1])
I wonder if there's a complete version of something like this out there. You can probably get pretty far staying in Python-land, plus, everything else is free (data types, standard library, adoption, etc.).

A = !cmd
> /bin/bash your_script.sh
And if your script is written with bash in mind, use a shebang:
> #! /bin/bash
And it will work perfectly fine on fish. As long as I have a bash binary, why do I need COMPATIBILITY?
> CC=gcc make
And if your program is written with gcc in mind, put that in the Makefile:
> CC ?= gcc
And it will work perfectly fine on LLVM systems. As long as I have a gcc binary, why do I need COMPATIBILITY?
----
A big part of the long-term objectives of OSH is that it provides a way to move to a better language, without having to entirely rewrite your codebase. Think of it in a similar spot to C++ (originally); a big part of the design is that your old C (bash) code is already valid C++ (osh), and you can start using new C++ (osh) features wherever you see fit in the codebase, without having to rewrite anything first.
Having bash compatibility is a quality-of-life feature; no need to debug all the weird cases where it breaks for purely syntactic reasons.
zsh and fish don't really belong in the same comparison IMO.
zsh is, like bash, a ksh-like shell with a Bourne-style grammar. Obviously it depends on the exact use case, but in practice, for basic scripting purposes, it is almost a super-set of bash, and it provides several emulation options (like sh_word_split) specifically designed to increase compatibility with POSIX and with bash in particular. It even provides shims to support bash completion functions. (It is fair to point out that, even with all the emulation stuff enabled, it's still not completely bash-compatible, nor POSIX-compliant. It's close enough that the changes required are usually extremely trivial, though.)
fish on the other hand has its own completely different grammar and makes no attempt to provide POSIX/ksh/bash compatibility at all.
Zsh isn't a new kid on the block. Both it and bash are actually about the same age: 28 years.
e: sorry some connection interruption and now this is redundant to other comments.
But of course your argument has the fundamental flaw that the Bourne Again, Korn, and even Bourne shells were not compatible with their various predecessors, but that turned out to be not as problematic in practice as you paint it to be. And we've had all sorts of things gaining "real adoption" over the years, from Norton Commander clones to Perl. The world is nowhere near as narrow as you think.
https://en.wikipedia.org/wiki/Shebang_(Unix)
I used fish for years, never had a problem with bash or zsh scripts.
I'm most excited about the Elvish shell and the language that's being developed around it. The shell is built with Go and feels super fast compared to my plugin-heavy ZSH. The language design is quite nice too, but still very alpha. Looking forward to seeing what it evolves into...
https://stackoverflow.com/questions/356100/how-to-wait-in-ba...
So whenever you want to do things in parallel there is probably a limit to the number of processes you would like to execute in parallel (e.g. the famous compiler limit formula: number of CPU cores +1). It would be great if Oil could support such a use-case out of the box, as easy parallelism without the ability to artificially limit the number of parallel executions is often useless.
I use xargs -P all over the Oil codebase, which does what you want. The trick is to end the file with "$@", and then invoke xargs -P 4 -- $0 my-func.
That way xargs can run arbitrary shell functions, not just something like sh -c "..." ! I'm going to write a blog post about this. I also do this with find -exec $0 myfunc ';'
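A minimal sketch of that "$@" dispatch pattern (the script path and function names below are made up for illustration; the real Oil scripts differ):

```shell
# Write a script whose last line is "$@", so any of its shell functions
# can be invoked as: ./script.sh <function> <args...>. xargs then fans
# out calls to this same script via $0.
cat > /tmp/dispatch_demo.sh <<'EOF'
#!/bin/bash
my_func() { echo "got: $1"; }

all() {
  # Run my_func on each item, up to 4 processes in parallel.
  printf '%s\n' one two three | xargs -n 1 -P 4 -- "$0" my_func
}

"$@"   # dispatch to whatever function name was passed in
EOF
chmod +x /tmp/dispatch_demo.sh
/tmp/dispatch_demo.sh all
```

The three "got:" lines may appear in any order, since the invocations run in parallel.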
https://github.com/oilshell/oil/blob/master/test/spec-runner...
However I think Oil will have something built-in to make this kind of parallel processing more friendly. I will probably implement xargs so it can run my own shell scripts without GNU xargs, but then add a nicer syntax. (Probably "each", since that's what xargs really does.)
This gets into your other question about standard utils, which I'll answer now. Short answer: yes I would like something like that, it's just a matter of development time and priorities. I agree with the problem you point out.
So I ended up using the loop syntax:
for i in {0..9}; do
echo "$i" &
done
wait
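(Assuming bash >= 4.3 for `wait -n`, here is one hypothetical way to cap that loop at a fixed number of concurrent jobs with today's tools; the function name and limit are made up:)

```shell
# Cap background jobs at $max; requires bash >= 4.3 for `wait -n`.
run_limited() {
  local max=4 i
  for i in {0..9}; do
    # At the cap? Block until any one background job exits.
    while [ "$(jobs -rp | wc -l)" -ge "$max" ]; do
      wait -n
    done
    echo "$i" &
  done
  wait   # collect the stragglers
}
run_limited
```

The ten numbers come out in nondeterministic order, but never more than four jobs run at once.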
It is not so Unix-like, but I find it easier to debug. It would be great if Oil had a solution for limiting that kind of parallel execution too. I am aware that this isn't simple, as there are different options for how to implement it (global limit vs. local limit vs. named limit).

Just an idea off the top of my head for an optional named limit. Let's call it 'flow':

flow [options] [command]
-n number of max parallel processes
-c (optional) identifier of the counter
Example:
for i in {0..9}; do
flow -c myCounter -n 4 echo "$i"
done
Just an idea.

Learn from history. In the 1980s the world improved on the terminal paradigms, with TUIs that included things like directly addressable output buffers and unified and standardized keyboard/mouse input event streams. In parallel, GUIs took hold, and there are nowadays a lot of GUI programs in the world.
I have been keeping a wiki page:
https://github.com/oilshell/oil/wiki/Interactive-Shell
Although honestly I won't get to any of this in the near future.
Dead project according to the author, though.
I think the modern "terminal" is the Browser. With URLs instead of file paths. But I think it's maybe time for something new. In the 70's we got the terminal. 20 years later we got the browser. Now another 20 years have passed. What's the next step? Terminal -> Browser -> ?? -> AI?
I mean, they can't keep using Bash 3 forever, right? (Hope)
http://penguindreams.org/blog/the-philosophy-of-open-source-...
Maybe they just expect you to install your own shell? I think a lot of people do that with homebrew?
I think Apple has largely expunged shell scripts from the startup process with launchd too? That is like their systemd.
Screen tiling and visual 'tabs' would also be welcome additions. Not everyone needs a graphic environment, and I refuse to install X just for better keyboard shortcuts on my terminal.
Since dvtm also works as a terminal emulator, it seems to me that you could use loadkeys to set up various keycodes to send the proper vt100 escape codes. I've not tried it, but see no reason why it shouldn't work.
If you just want "No X install" you can use a frame buffer terminal (fbterm was one I used to use, but it doesn't appear to have been updated in a while, perhaps there is a spiritual successor, or maybe it already does what you want)
[edit] YAFT https://github.com/uobikiemukot/yaft looks like it's more up to date than fbterm.
Nevertheless, there is one piece in this puzzle I am missing. There does not seem to be a process which manages the 'core software set' across platforms. So after decades we finally have a shell which is available on most operating systems, but how long will it take before Microsoft, Apple, Oracle, etc. will adopt a new shell?
So why don't the large OS corporations form a consortium to define something like a 'cross platform run time environment' standard (maybe together with the Linux Foundation and some BSD guys?). I mean it's not so much about which shell someone prefers, but more about a common set of interpreters and maybe tool kits. And even more than that, it is not about the state but the process/progress.
What do you think, do we need such a process or is there another way to solve the cross platform dilemma?
I thought more about a higher level standard like adding Python, Lua or Qt to every installation by default. As some of those things are pretty heavy I doubt that it would be a wise choice to include them in POSIX.
Just imagine a world where you could simply write a small python script which would start a complete GUI application on different platforms without any additional installation procedures. To my knowledge that is not possible today. AFAIK the only way today is to bundle the dependencies, but that has a lot of negative effects.
Probably the first cut will be an "app bundle" format for Oil + busybox + arbitrary user utilities.
I'm more interested in the subset of busybox that is PORTABLE. busybox doesn't run on BSDs, because a lot of it is tied to the Linux kernel. It's more for embedded Linux.
I actually worked on the toybox project around when starting Oil (toybox is the "busybox" on Android, started by the former busybox maintainer Rob Landley.)
So I don't want to necessarily create another package manager, which is sort of implied by your question (?). For shell, the package manager is traditionally the system one -- "apt-get" on Debian, maybe homebrew on Mac, etc.
But I definitely want to solve the dependency problem, and I think the best way to do that is through some kind of support for app bundles. Of course you can also create a container image if you like.
The latter problem could probably be solved with a wrapper which would pipe and execute the shell (or a bytecode interpreter ala shuttle?) automatically - but I've seen no alternative shell project take this part seriously for the problem space.
erm, there's a big difference between a command scripting language and a programming language. These should be treated as different things.
I have years of experience using both, and I really don't want to be doing shell tasks in a programming language and I don't want to write programs in a shell language. Those sorts of hybrids are almost always mediocre. Horses for courses and all that.
There's a reason bash keeps being used - it's mature, it's simple, it's easy and people are productive with it.
Let's say there are two uses of a shell: 1. interactive and 2. non-interactive (scripting).
Let's imagine the commandline user is learning about her OS. She learns it is heavily reliant on shell scripts to build and (if desired) to automate starting services.
She realises that to understand the OS she will have to learn the shell that the OS developers used for scripting.
Then she realises that if she chooses another shell for interactive use, she will have to learn two shells.
Finally she realises that any script she writes in the "non-interactive/scripting" shell will also run under the interactive one. But not vice versa.
If she only has enough time in life to master one shell, which one should she choose?
Over time I found I really cared more about the scripting aspect of a shell than the interactive facet.
The scripting shell used by the OS authors might be an Almquist derived shell, for instance.
Occasionally the ash I'm using gets a new "feature" but not too often. I like that it stays relatively small. The latest "feature" is LINENO.
But I also use a smaller version of this shell with no command line history, no tabcomplete, etc. IMO, there is no better way to learn how to reduce keystrokes. It has led to some creativity in this regard for which I am thankful.
After "mastering" ash, I started using execlineb, pipeline and fdmove. I am starting to use more components of execline and am continually replacing ash scripts with execline scripts for more and more daily work.
I guess we will never see execline on the front page, which I think would be interesting because I would like to hear whatever harsh critique HN can muster.
Seeking a better non-interactive/scripting experience, I have experimented with many other shells over the years, and written simple execve "program launchers", but in this vein, I have not found anything that compares to execline.
The speed gains and resource conservation are obvious, but with the ability to do "Bernstein-chaining" and the option to use djb low-level functions instead of libc, it is a rare type of project.
The speed and cleanliness of the compilation process is, compared to all the other crud one routinely encounters in open source projects, "a thing of beauty". Humble opinion only, but I think others might agree.
* https://news.ycombinator.com/item?id=12600807
Laurent Bercot no longer has xyr page about the compilation process. I have since picked up some of the slack there. Although I don't go into things like the way that M. Bernstein avoided autotools.
b = [ @a ]
which pretty much looks like Perl with added line noise. Why are the [] even necessary when it's clear @a is an array?

As for a new language, I feel like if you want to script things, you can use Ruby or Python; hell, Perl will do and you'd be fine. I don't want to be unfair to this effort, I just feel that it is not for me and I am a tinkerer.
* http://perldoc.perl.org/functions/open.html
* http://perldoc.perl.org/IPC/Open2.html
* http://perldoc.perl.org/IPC/Open3.html
* http://search.cpan.org/~odc/IPC-Open2-Simple-0.01/lib/IPC/Op...
* http://search.cpan.org/~exodist/Child-0.013/lib/Child.pm
* http://search.cpan.org/~rkrimen/IPC-RunSession-Simple-0.002/...
* http://search.cpan.org/~trski/Proc-Forkmap-0.025/lib/Proc/Fo...
* http://search.cpan.org/~toddr/IPC-Run-0.96/lib/IPC/Run.pm
* http://search.cpan.org/~ayoung/IPC-Run3-Simple-0.011/lib/IPC...
* http://search.cpan.org/~rjbs/IPC-Run3-0.048/lib/IPC/Run3.pm
* http://search.cpan.org/~djerius/IPC-PrettyPipe-0.03/lib/IPC/...
* http://search.cpan.org/~xan/IPC-Pipeline-1.0/lib/IPC/Pipelin...
* http://search.cpan.org/~sscaffidi/IPC-OpenAny-0.005/lib/IPC/...
* http://search.cpan.org/~glai/IPC-Exe-2.002001/lib/IPC/Exe.pm
* http://search.cpan.org/~zefram/IPC-Filter-0.005/lib/IPC/Filt...
f() {
echo --
ls /
echo --
}
f > out.txt
f | wc -l

#!/usr/bin/perl
use strict;
sub f
{
my $outputFH=shift;
print $outputFH "--\n";
open(my $lsFH,"ls /|") or die("pipe ls: $?");
print $outputFH (<$lsFH>);
close($lsFH);
print $outputFH "--\n";
}
open(my $outTxtFH,">","out.txt") or die("open: out.txt:$?");
f($outTxtFH);
close($outTxtFH);
open(my $wcFH,"|wc -l") or die("pipe wc: $?");
f($wcFH);
close($wcFH);

There are a number of ways to do these same things. Some of them mirror your code more closely than others. Here's my first shot using a core module, since someone already did one with no modules that works much like your code.
use IPC::Run3;
my @lines;
sub f {
my @command = qw( ls / );
run3 \@command, \undef, \@lines;
}
f();
open my $out, '>','out.txt' or warn "can't write to out.txt : $!\n";
printf $out "--\n%s--\n", (join '', @lines);
print scalar @lines . "\n";
Now I'd make that a bit cleaner and more reusable of course. I'd probably take the commands to run from the command line or a configuration file. I'd probably return an array or use a reference to one rather than making a file-level lexical array and just using that from a subroutine.

#!/usr/bin/perl -w
use strict;
use English;
use autodie;
$OFS=$ORS="\n";
sub f { my $h ; opendir($h,$_[0]) ; print "--",(readdir($h)),"--"; closedir($h);}
my $out;
open($out,">/tmp/out.txt") ; select $out ; f("/tmp");close($out);
open($out,"| wc -l ") ; select $out ; f("/tmp"); close($out);
select STDOUT;
Would I use perl/python to write this kind of stuff? 'course not. Why would I go through the opendir rigmarole, if all I really need is 'ls'. But there are zillions of (non application) tasks where bash's syntax gets very quickly unwieldy (think filenames with blanks, quoting quotes, composing pipes programmatically, having several filehandles open at once...) while perl shines. And you can still throw the occasional

@ary=split("\n",`ls`);

around if you feel so inclined.
#!/usr/bin/perl -w
use strict;
use English;
use autodie;
sub f { open(my $h,"/bin/ls $_[0]|") ; print "--\n",(<$h>),"--\n";}
open(my $o,">/tmp/out.txt") ; select $o ; f("/tmp") ;
open($o,"| wc -l ") ; select $o ; f("/tmp") ; sub output_of {
my(@commands) = @_;
my $pid = open(my $fh, "-|") // die "$0: fork: $!";
return $fh if $pid;
for (@commands) {
my $grandchild = open(my $gfh, "-|") // die "$0: fork: $!";
if ($grandchild) {
print while <$gfh>;
close $gfh or warn "$0: close: $!";
}
else {
exec @$_ or die "$0: exec @$_: $!";
}
}
exit 0; # child
}
Call it as with:

my $fh = output_of [qw( echo -- )],
[qw( ls / )],
[qw( echo -- )];
while (<$fh>) {
print "got: $_";
}
close $fh or warn "$0: close: $!";
If implicitly using the shell is acceptable, but we want to interpose some processing, that will resemble:

my $output = `echo -- ; ls / ; echo --` // die "$0: command failed";
chomp $output;
print "$0: lines = ", `echo '$output' | wc -l`;
This becomes problematic if the output from earlier commands collides with the shell’s quoting rules. This lack of “manipulexity” that we quickly bump into with shell scripts — that are otherwise great on the “whipuptitude” axis — was a common frustration before Perl. The gap between C and the shell is exactly the niche on POSIX systems that Perl occupies and was its initial motivation.

If all you want to do is redirect anyway, run
system("{ echo -- ; ls / ; echo -- ; } > out.txt") == 0
or die "$0: command failed";
Use the appropriate tool for the job. Perl was not designed to replace the shell but to build upon it. The shell is great for small programs with linear control flow. It’s hard to beat the shell for do-this-then-this processing. The real world likes to get more complex and nuanced and inconsistent, however.

Maybe I am missing your point entirely. Do you have a more concrete example in mind?
...people frequently ask this? Tip #21 of "The Pragmatic Programmer" states: "Use the Power of Command Shells."
In a general purpose programming language there's a lot of overhead for doing the same things.
For maintainability, there are now linters for shell languages that can help make the job easier.
Obligatory in case anyone hasn't seen it:
Works as a web app or local tool.
https://www.gnu.org/software/bash/manual/html_node/The-Restr...
I believe Oil will be able to do this, because the architecture is very modular. See the last point in the post using the LLVM / GCC analogy.
(This type of feature isn't a priority now, but I'm interested in hearing use cases.)
https://www.reddit.com/r/ProgrammingLanguages/comments/7qn14...
Bizarrely (to me), more than one person thought the name was a play on the company "Shell Oil". Is that the connotation you got from it?
That's unfortunate, but I think as people use it more, the name will take on a different connotation. Guido was fighting "Python == snake" for a long time too (it comes from Monty Python). There were a lot of people that said the name Python was stupid and you couldn't convince your boss to use a language with a name like that.
I jest.
1) I prototyped it in Python; the dependency on the Python interpreter will be removed [1]
2) Oil went through many implementation languages, and one incarnation was 2000-3000 lines of C++. But I realized I would NEVER finish that way. The goal is to be compatible with bash, which is a tall order.
3) Oil is heavily metaprogrammed. It's only 16K lines of Python, compared to 160K lines of bash, and it can run some of the most complex bash programs out there. [2]
It's more accurate to say Oil is written in Python + ASDL [3], i.e. somewhat in the style of ML.
[1] https://news.ycombinator.com/item?id=16277358
If you expect the shell process to need to make use of true thread-based concurrency, then that might be a reason not to use python.
Do we have either of the above expectations? What other reasons are there for python to be inappropriate?
I said that you can't convince people not to use bash or PHP by writing posts on the Internet, which is true.
I also said that Facebook is replacing PHP, which is true. That's not a criticism of PHP. The fact that huge companies like Yahoo and Facebook can be started with PHP is amazing.
I think PHP is a good analogy for bash. It gets the core things right, and it gets a ton of work done. I like languages you can get work done in! That's why I use bash.
But both languages also evolved a lot of warts. That's inevitable when you have so many users. They have diverse needs, and you need to preserve backward compatibility, which leads to an awkward evolution.
The reasons for that are that shells must start very quickly (due to subshells, local ssh, etc.), be fast, have no complex dependencies since they are used to recover broken systems, be portable but also with full support for OS semantics and be written in a language that allows rapid development of robust software, none of which Python does well.