My basic thesis is that Shell as a programming language---with its dynamic scope, focus on line-oriented text, and pipelines---is simply a different programming paradigm than languages like Perl, Python, whatever.
Obviously, if your mental model is BASIC and you try to write Python, then you encounter lots of friction and it's easy for the latter to feel hacky, bad and ugly. To enjoy and program Python well, it's probably best to shift your mental model. The same goes for Shell.
What is the Shell paradigm? I would argue that it's line-oriented pipelines. There is a ton to unpack in that, but a huge example where I see friction is overuse of variables in scripts. Trying to stuff data inside variables, given shell's paucity of data types, is a recipe for irritation. However, if you instead organize all your data in a format that's sympathetic to line-oriented processing on stdin-stdout, then shell will work with you instead of against you.
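A minimal sketch of what I mean (the records and field values here are made up): keep data as tab-separated lines flowing over stdin/stdout rather than in variables, and let ordinary filters do the work.

```shell
# Records as tab-separated "name<TAB>age" lines; no variables needed,
# every stage is just another line-oriented filter.
printf '%s\t%s\n' alice 31 bob 24 carol 57 |
  awk -F'\t' '$2 >= 30 { print $1 }' |
  sort
```

Each stage can be swapped out or extended without touching the others, which is the part that variable-heavy scripts lose.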
/2cents
What Python is to Java, Shell is to Python. It speeds you up several times. I've started using inline 'python -c' more often than the Python REPL now, as it stores the command in shell history, where it's then one fzf search away.
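For example (the calculation is made up, but the shape is the point):

```shell
# A throwaway computation as a one-liner; unlike REPL input, the whole
# command lands in shell history, ready for an fzf search later.
python3 -c 'print(sum(n*n for n in range(10)))'
```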
While neither Shell nor SQL is perfect, there have been many ideas to improve them, and for sure people can't wait for something new like Oil Shell to get production ready and finally get the shell-quoting hell right, or for somebody to fix up SQL: bringing old ideas from Datalog and QUEL into it, fixing the goddamn NULL joins, etc.
But honestly, nothing else even comes close to this 10x productivity increase over the next best alternative. No, thank you, I will not rewrite my 10 lines of sh into Python just to explode it into 50 lines of shuffling clunky objects around. I'll instead go and reread that man page on how to write an if expression in bash again.
It is "opinion" based on debugging scripts made by people (who might be "you, but a few years ago") who don't know the full extent of the death-traps built into the language. Or on really writing anything more complex.
About the only strong side of shell as a language is the pipe character. Everything else is less convenient at best, actively dangerous at worst.
Sure, "how to write something in a limited language" might be a fun mental exercise, but as someone who has sat in the ops space for the good part of 15 years, it's just a burden.
Hell, I'd rather debug Perl script than Bash one...
Yeah, if it's a few pipes and some minor post-processing I'd use it too (the pipe is the easiest way to do that of all the languages I've seen), but that's about it.
It is nice for writing one-liners on the command line, but the characteristics that make it nice there make it a worse programming language. A bit like Perl in that respect.
Not even that is necessary. Just use structured data formats like json. If you are consuming some API that is not json but still structured, use `rq` to convert it to json. Then use `jq` to slice and dice through the data.
dmenu + fzf + jq + curl is my bread and butter in shell scripts.
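As a small illustration of the slice-and-dice step (the JSON payload is invented, and this assumes jq is installed):

```shell
# Slice a structured payload with jq instead of parsing text by hand:
# print the name of every item whose count exceeds 1.
printf '%s' '[{"name":"a","count":2},{"name":"b","count":1},{"name":"c","count":5}]' |
  jq -r '.[] | select(.count > 1) | .name'
```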
However, I still haven't managed to find a way to do a bunch of tasks concurrently. No, xargs and parallel don't cut it. Just give me an opinionated way to do this that is easily inspectable, loggable and debuggable. Currently I hack together functions in a `((job_i++ < max_jobs)) || wait -n` spaghetti.
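For reference, a sketch of the pattern that spaghetti expands to (assumes bash 4.3+ for `wait -n`; `work` is a stand-in for the real task):

```shell
max_jobs=4
job_i=0
work() { sleep 0.1; echo "done $1"; }   # stand-in for a real task
for task in 1 2 3 4 5 6 7 8; do
  work "$task" &
  # Once max_jobs tasks are in flight, reap one before starting the next
  (( ++job_i < max_jobs )) || { wait -n; (( job_i-- )); }
done
wait  # reap the stragglers
```

It works, but logging, inspecting, and debugging the in-flight jobs is exactly the part this pattern gives you no help with.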
I swear, there should be a HN rule against those. They pollute every single Shell discussion, bringing nothing to them and making it hard for others to discuss the real topic.
Which Python can do relatively well, using the `subprocess` module.
Here is an example, including a useless use of cat (https://porkmail.org/era/unix/award), that finds all title lines in README.md and uppercases them with `tr`:
import subprocess as sp

# cat README.md (the promised useless use of cat)
cat = sp.Popen(
    ["cat", "README.md"],
    stdout=sp.PIPE,
)
# keep only lines containing '#' (the markdown titles)
grep = sp.Popen(
    ["grep", "#"],
    stdin=cat.stdout,
    stdout=sp.PIPE,
)
cat.stdout.close()  # let cat receive SIGPIPE if grep exits early
# uppercase the matching lines
tr = sp.Popen(
    ["tr", "[:lower:]", "[:upper:]"],
    stdin=grep.stdout,
    stdout=sp.PIPE,
    stderr=sp.PIPE,
)
grep.stdout.close()
out, err = tr.communicate()
print(out.decode("utf-8"), err.decode("utf-8"))
Is this more complicated than doing it in bash? Certainly. But on the other side of that coin, it's a lot easier in Python to do a complex regular expression (maybe depending on a command line argument) on one of those, use the result in an HTTP request via the `requests` module, pack the results into a diagram rendered as a PNG, and send it via email. Yes, that is a convoluted example, but it illustrates the point I am trying to make. Everything outlined could probably be done in a bash script, but I am pretty certain it would be much harder, and much more difficult to maintain, than doing this in Python.
Bash is absolutely fine up to a point. And with enough effort, bash can do extremely complex things. But as soon as things get more complex than standard unix tools, I'd rather give up the comfort of having specialized syntax for pipes and filehandles, and write a few more lines handling those, if that means I can do the more complex stuff easily using the rich module ecosystem of Python.
Bourne Shell Scripting is literally a bunch of weird backwards compatible hacks around the first command line prompt from 1970. The intent was to preserve the experience of a human at a command prompt, and add extra functionality for automation.
It's basically a high-powered user interface. It emphasizes what the operator wants for productivity, instead of the designer in her CS ivory tower of perfection. You can be insanely productive on a single line, or paste that line into a file for repeatability. So many programmers fail to grasp that programming adds considerations that the power user doesn't care about. The Shell abstracts away all that unnecessary stuff and just lets you get simple things done quickly.
- no standard unit testing
- how do you debug except with printlns? Fail.
- each line usually takes a minimum of 10 minutes to debug unless you've done bash scripting for... ten years
- basic constructs like the arg array are broken once you have special chars and spaces and want to pass those args to other commands. And UNICODE? Ha.
- standard library is nil, you're dependent on a hodgepodge of possibly installed programs
- there is no dependency resolution or auto-install of those programs or libraries or shell scripts. since it is so dependent on binary programs, that's a good thing, but also sucks for bash programmers
- horrid rules on type conversions, horrid syntax, space-significant rules
- as TFA shows, basic error checking and other conventions are horrid; yeah, I want a crap 20-line header for everything
- effective bash is a bag of tricks. Bag of tricks programming is shit. You need to do ANYTHING in it for parsing, etc? Copy paste in functions is basically the solution.
- I'm not going to say interpreter errors are worse than C++ errors, but it's certainly not anything good.
Honestly since even effing JAVA added a hashbang ability, I no longer need bash.
Go ahead, write some bash autocompletion scripts in bash. Lord is that awful. Try writing something with a complex options / argument interface and detect/parse errors in the command line. Awful.
Bash is basically software engineering from the 1970s, oh yeah, except take away the word "engineering". Because the language is actively opposed to anything that "engineering" would entail.
And sure sure you can call any process from a language but the assumptions are different. No one wants to call a Java jar that has a dependency on the jq CLI app being available.
Handling binary data can also work in Bash, provided that you just use it as a glue for pipelines between other programs (e.g. feeding video data into ffmpeg).
One time, while working on a computer vision project, I had to hack up a video-capture-and-upload program for gathering training data during a certain time of day. It took me about 20 minutes and 50 lines of Bash to set up the whole thing, test it, and be sure it worked.
It's also important to learn your system's environment too. This is your "standard library", and it's why POSIX compatibility is important. You will feel shell is limited if you don't learn how to use the system utilities with shell (or if your target system has common utilities missing).
As an example of flexibility, you can use shell and system utilities in combination with CGI and a basic web server to send and receive text messages on an Android phone with termux. Similar to a KDE Connect or Apple's iMessage.
You think the complaints about rickety, unintuitive syntax are "socially founded"? I can't think of another language that has so many pointless syntax issues every time I revisit it. I haven't seen a line of Scheme in over a decade, and I'm still fairly sure I could write a simple if condition with less likelihood of getting it wrong than Bash.
I came at it from the other end, writing complex shell scripts for years because of the intuition that python would be overkill. But there was a moment when I realized how irrational this was: shell languages are enough of a garbage fire that Python was trivially the better choice for my scripts the minute flow control enters the picture.
Bash has dynamic scope with its local variables.
The standard POSIX language has only global variables: one pervasive scope.
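A quick illustration of what dynamic scope means in practice here:

```shell
# Dynamic scope: a `local` variable is visible to every function called
# beneath the declaring function, not just lexically inside it.
inner() { echo "x is: $x"; }       # no x defined here
outer() { local x=hello; inner; }  # inner sees outer's local x
outer
```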
* I like the HGPPL data structures and a convenient library for manipulating them (in my case this is Clojure, which has a great core library). Bash has indexed and associative arrays.
* Libraries for common data formats are also used in a consistent way in the HGPPL. I don't have to remember a DSL for every data format - e.g. how to use jq when dealing with JSON. Similarly for YAML, XML, CSVs; I can also do templating for configuration files for nginx and so on. I've seen way too many naive attempts to piece together valid YAML from strings in bash to know it's just not worth doing.
* I don't want to switch programming language from the main application, and I find it helps "break down silos" when everyone can read and contribute to some code. If a team is just sysadmins - sure, make bash the official language and stick to it.
* I can write scripts without repeating myself using namespaces and higher-order functions, which is my choice of paradigm for abstraction; others write cleanly with classes. You can follow best practices and avoid the use of ENV vars, but that requires extra discipline, and it is hard to enforce on others at the kind of places where bash is used.
This argument is essentially the same as "dynamic typing is just a different programming paradigm than static typing, and not intrinsically better or worse" - but to an even greater extent, because bash isn't really typed at all.
To those who think that static (and optional/gradual) typing brings strong benefits with little downsides over dynamic typing and becomes increasingly important as the size of a program increases, bash is simply unacceptable for any non-trivial program.
Other people (like yourself) that think that static typing isn't that important and "it's just a matter of preference" will be fine with an untyped language like bash.
Unfortunately, it's really hard to find concrete, clear evidence that one typing paradigm is better than the other, so we can't really make a good argument for one or the other using science.
However, I can say that you're conflating different traits of shell languages here. You say "dynamic scope, focus on line-oriented text, and pipelines" - but each of those are very different, and you're missing the most contested one (typing). Shell's untypedness is probably the biggest complaint about it, and the line-oriented text paradigm is really contentious, but most people don't care very much about the scoping, and lots of people like the pipelines feature.
A shell language that was statically-typed, with clear scoping rules, non-cryptic syntax, structured data, and pipelines would likely be popular and relatively non-controversial.
So if you don't care about error cases, everything is fine, but if you do, it gets ugly really fast. And that is the reason why other languages are probably better suited if you want to write something bigger than 10 lines.
However, I have to admit, I don't follow that advice myself...
There is something to be said in favor of the shell being always available, but Perl is almost always available too. FreeBSD does not have it in the base system, but OpenBSD does, and most Linux distros do, too.
But it is fun to connect a couple of simple commands via pipes and create something surprisingly complex. I don't do it all the time, but it happens.
However the biggest issues I've had is that the code is really hard to test, error handling in shell isn't robust, and reusability with library type methods is not easy to organize or debug.
Those are deal breakers for me when it comes to building any kind of non trivial system.
Clearly, it's for a different purpose, and there are some things that wouldn't work in a general-purpose language that isn't as focused on line-based string processing, but we are really happy with the things we took from bash.
Shell quoting though, Aieeee...
I find I have to shift gears quite substantially moving from shell or powershell to anything else...
"I'll just pipe the output of this function into.. oh, right"
I think fish is quite a bit different in terms of syntax and semantics (I'm not very familiar with it), but zsh is essentially the same as bash except without most of the needless footguns and awkwardness. zsh also has many more advanced features, which you don't need to use (and many people are unaware of them anyway), but will very quickly become useful; in bash all sorts of things require obscure incantations and/or shell pipelines that almost make APL seem obvious in comparison.
In my experience few people understand bash (or POSIX sh) in the first place, partly because everything is so difficult and full of caveats. Half my professional shell scripting experience on the job is fixing other people's scripts. So might as well use something that doesn't accidentally introduce bugs every other line.
Most – though obviously far from all – scripts tend to be run in environments you control; portability is often overrated and not all that important (except when it is of course). Once upon a time I insisted on POSIX sh, and then I realised that actually, >90% of the scripts I wrote were run just by me or run only in an environment otherwise under my control, and that it made no sense. I still use POSIX sh for some public things I write, when it makes sense, but that's fairly rare.
I think bash is really standing in the way of progress, whether that progress is in the form of fish, zsh, oil shell, or something else, because so many people conflate "shell" with "bash", similar to how people conflate "Google" with "search" or "git" with "GitHub" (to some degree).
Of course Bash is ubiquitous, so I use it whenever I can in the company. A golden rule for me is: if it has more than 50 lines, then I should probably write it in a decent programming language (e.g. Ruby). It makes maintenance so much easier.
Ten years or so ago the cool kids were using zsh: which is in general a pretty reasonable move, it’s got way more command-line amenities than bash (at least built in).
Today fish is the fucking business: fish is so much more fun as a CLI freak.
But I guess I've got enough PTSD from when k8s or its proprietary equivalents get stuck, and I always wanted to be not only functional but fast in outage-type scenarios, that I kept bash as a daily driver.
Writing shell scripts of any kind is godawful, and the equivalent Python is the code you want to own, but shell's universality is a real selling point - like why I keep half an eye on Perl 5 even though I loathe it: it may suck, but it's always there when the klaxon is going off.
The best possible software is useless if it’s not installed.
There is a general problem in the fact that a radical evolution of glue languages wouldn't be popular because devs would rather use Python, and small evolutions (e.g. zsh) wouldn't be popular either, because they end up being confusing (since they're still close to Bash) without bringing significant advantages.
I'm curious why there haven't been attempts to write a modern glue language (mind that languages like Python don't fit this class). I guess that Powershell (which I don't know, though) has been the only attempt.
If you're at that spot, don't use shell in the first place; use whatever other scripting language your team uses. Well, unless it's "pipe this to that to that" - sh has no parallel there.
Same PS1, aliases, functions, etc. but with a couple of slight variations due to syntax differences
I don't agree with this one. When I name my script without extension (btw, .sh is fine, .bash is ugly) I want my script to look just like any other command: as a user I do not care what the language program is written in, I care about its output and what it does.
When I develop a script, I get the correct syntax highlighting because of the shebang, so the extension doesn't matter.
The rest of the post is great.
"Ugly" is subjective. If I encountered a file with that extension, I'd assume it uses Bash-specific features and that I shouldn't run this script with another shell.
Only if it doesn't matter that the script fails non-gracefully. Some scripts are better to either have explicit error handling code, or simply never fail. In particular, scripts you source into your shell should not use set options to change the shell's default behavior.
"Prefer to use set -o nounset."
ALWAYS use this option. You can test for a variable that might not be set with "${FOO:-}". There is no real downside.
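For the record, a minimal sketch of that escape hatch (the variable name is made up):

```shell
set -o nounset
# Referencing "$UNSET_VAR" bare would abort here; the :- default makes it safe.
if [ -z "${UNSET_VAR:-}" ]; then
  echo "UNSET_VAR is empty or unset"
fi
```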
"Use set -o pipefail."
Waste of time. You will spend so much time debugging your app for random pipe failures that actually didn't matter. Don't use this option; just check the output of the pipe for sane values.
"Use [[ ]] for conditions"
No!!! Only use that for bashisms where there's no POSIX alternative and try to avoid them wherever possible. YAGNI!
"Use cd "$(dirname "$0")""
Use either "$(dirname "${BASH_SOURCE[0]}")" or grab a POSIX `readlink -f` implementation.
"Use shellcheck."
This should have been Best Practice #1. You will learn more about scripting from shellcheck than 10 years worth of blog posts. Always use shellcheck. Always.
Also, don't use set -o nounset when set -u will do. Always avoid doing something "fancy" with a Bashism if there's a simpler POSIX way. The whole point of scripts is for them to be dead simple.
For most people, YAGNI means using convenient Bash-isms, because their scripts won't ever be run on environments that don't have Bash.
Edit: Admittedly, someone in this thread pointed out the flaw in my argument, there are plenty of cases where you can't assume you have Bash. I still hold that proofing something for all possible environments is itself a YAGNI.
This seems like really bad advice, because the number of people writing bash massively outnumbers the people writing sh. Regex matching, glob matching, proper parsing, &&/||, no need to quote.
I would say the opposite, enjoy all the bashisms like (( )) [[ ]], numeric for loops, extended globs, brace expansion, ranges, OH GOD YES VARIABLE REFERENCES, and only rewrite when you absolutely have to make it work on sh.
Silently ignoring sub-commands that exit with a non-zero code is not the same thing as "never failing". Your script might return 0, but that doesn't mean it did what you expect it to.
> Also, don't use set -o nounset when set -u will do.
`set -o nounset` is a lot easier to understand for the next person to read the script. Yes, you can always open the manpage if you don't remember, but that is certainly less convenient than having the option explained for you.
What shell are you using that doesn't support `set -o nounset`? Even my router (using OpenWRT+ash) understands the long-form version.
> Only use that for bashisms where there's no POSIX alternative
I totally disagree. You expect people to know the difference between `[[ ... ]]` and `[ ... ]` well enough to know when the bash version is required? You expect the next person to edit the script to know that if they change the condition, they might need to switch from `[` to `[[`?
How do you even expect people to test which of the two that they need? On most systems, `/bin/sh` is a link to `/bin/bash`, and the sh-compatibility mode of bash is hardly perfect. It's not necessarily going to catch a test that will fail in `ash` or `dash`.
I think the "YAGNI" applies to trying to support some hypothetical non-bash shell that more than 99% of scripts will never be run with. Just set your shebang to `#!/bin/bash` and be done with it.
I totally agree about `pipefail`, though. I got burned by it with a condition like the one below:

if (foo | grep -Eq '...'); then
Since `-q` causes grep to exit after the first match, the first command exited with an error code since the `stdout` pipe was broken.
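A minimal reproduction of that failure mode (on Linux, the producer typically dies with SIGPIPE, giving status 141, i.e. 128+13):

```shell
set -o pipefail
# grep -q exits at the first match; seq keeps writing into a closed pipe,
# gets SIGPIPE, and pipefail turns the "successful" search into a failure.
seq 1 100000 | grep -q 5
echo "pipeline status: $?"
```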
To that end, would it not make more sense to always use `[[ ... ]]` for conditions, when I know my .bash scripts will always be invoked by bash?
Consistency is simple.
> Only if it doesn't matter that the script fails non-gracefully. Some scripts are better to either have explicit error handling code, or simply never fail.
Then handle those errors explicitly. The above will catch those error that you did not think about.
Oh, how I hate the double square bracket. It is the source of many head-scratching bugs and much time wasted. "The script works on my machine!" It doesn't work in production, where we only have sh. It won't exit due to an error; the if statement will gobble the error. You only find the bug after enough bug reports hit that particular condition.
After a couple shots to the foot I avoid double square brackets at all cost.
Pass all scripts through https://www.shellcheck.net/ or use `shellcheck` on the commandline.
Learn the things it tells you and implement them in future scripts.
I'm almost tempted to put in a self-linting line in scripts so that they won't run unless shellcheck passes completely. (It would be unnecessary to lint the same script every time it's called though, so it's not a serious suggestion).
There should be an option in bash to auto-lint scripts the first time that they're called, but I don't know how the OS should keep track of when the script was last changed and last linted.
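The self-linting idea above can be sketched as a guard at the top of a script (assuming shellcheck is on PATH; as noted, not a serious recommendation, since it re-lints on every run):

```shell
# Refuse to run unless the script itself passes shellcheck.
if command -v shellcheck >/dev/null 2>&1; then
  shellcheck "$0" || { echo "shellcheck failed; refusing to run" >&2; exit 1; }
fi
echo "script body runs here"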
He suggests to `set -eu`, which is a good idea, but then immediately does this:
if [[ "$1" =~ ^-*h(elp)?$ ]]; ...
If the script is given no arguments, this will exit with an unbound variable error. Instead, you want something like this:

if [[ "${1-}" =~ ^-*h(elp)?$ ]]; then

Unfortunately, `errexit` is fairly subtle. For example,

[ "${some_var-}" ] && do_something

is a standard way to `do_something` only when `some_var` is non-empty. With `errexit`, naively, this should fail, since `false && anything` is always false. However, `errexit` in later versions of Bash (and dash?) ignores this case, since the idiom is nice. However! If that's the last line of a function, then the function's return code will inherit the exit code of that line, meaning that

f(){ [ "${some_var-}" ] && do_something;}; f

will actually trigger `errexit` when `some_var` is empty, despite the code being functionally equivalent to the above, non-wrapped call. Anyway, there are a few subtleties like this that are worth being aware of. This is a good, but dated, reference: https://mywiki.wooledge.org/BashFAQ/105
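The difference described above is easy to demonstrate; each command below runs a fresh bash with errexit enabled:

```shell
# Top level: the failing `[` is part of a && list, so errexit lets it pass.
bash -ec 'some_var=""; [ "${some_var-}" ] && echo set; echo survived'
# Wrapped in a function: f itself returns nonzero, so errexit fires
# and "survived" is never printed.
bash -ec 'f(){ [ "${some_var-}" ] && echo set;}; f; echo survived' || echo "exited early"
```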
I'm a fan of using BASH3 boilerplate: https://bash3boilerplate.sh/
It's standalone, so you just start a script using it as a template and delete bits that you don't want. To my mind, the best feature is having consistent logging functions, so you're encouraged to put in lots of debug commands to output variable contents and when you change LOG_LEVEL, all the extraneous info doesn't get shown so there's no need to remove debug statements at all.
The other advantage is the option parsing, although I don't like the way that options have to have a short option (e.g. -a) - I'd prefer to just use long options.
> And it’s usually always appropriate.
I wouldn't think so. You don't know where your script will be called from, and many times the parameters to the script are file paths, which are relative to the caller's path. So you usually don't want to do it.
I collected many tips&tricks from my experience with shell scripts that you may also find useful: https://raimonster.com/scripting-field-guide/
Edit: just had a quick look at your recommended link and spotted a "mistake" in 4.7 - using "read" without "-r" would get caught out by shellcheck.
https://stackoverflow.blog/2022/07/06/why-perl-is-still-rele...
https://stackoverflow.blog/2022/09/08/this-is-not-your-grand...
FOO_ARGS=(
  # Some explanatory comment
  --my-arg 'some value'
  # More comments
  some other args
  #...
)
myCondition && FOO_ARGS+=(some conditional args)
foo "${FOO_ARGS[@]}"

If a shell script needs any kind of functionality beyond POSIX, then that's a good time to upgrade to a higher-structure programming language.
Here's my related list of shell script tactics:
For me, the additional features that bash provides are much more important than a portability that I'll never need to use.
Bash in --posix mode does that perfectly.
Agreed on the powerful bit; however, [[ ]] is not a "builtin" (whereas [ and test are builtins in bash), it's a reserved word, which makes it more similar to if and while.
That's why [[ ]] can break some rules that builtins cannot, such as `[[ 1 = 1 && 2 = 2 ]]` (vs `[ 1 = 1 ] && [ 2 = 2 ]` or `[ 1 = 1 -a 2 = 2 ]`, -a being deprecated).
Builtins should be considered common commands (like ls or xargs), since they cannot bypass some fundamental shell parsing rules (assignment builtins being an exception); the main advantages of being a builtin are speed (no fork needed) and access to the current shell process environment (e.g. read being able to assign a variable in the current process).
For example, using cd "$(dirname "$0")" to get the script's location is not reliable; you could use a more sophisticated option such as: $(dirname $BASH_SOURCE)
When it comes to bash, a search for even the simplest command/syntax always, ALWAYS, leads to a stackoverflow thread with 50 answers where bash wizards pull one-liners from their sleeves and nitpick and argue about various intricacies.
I'm more likely to use the BashFAQ though for actual snippets: https://mywiki.wooledge.org/BashFAQ
I start scripts from the very useful template at https://bash3boilerplate.sh/
Also, did you mean to write...?
$(dirname "$BASH_SOURCE")

Luckily, most commonly encountered scripting issues are with whitespace in filenames/variables, and running a script through shellcheck will catch most (all?) of those problems.
It's amazing how edge cases can make a simple command such as 'echo' break. (Top tip - use printf instead of echo)
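A concrete case of echo breaking (this is bash's builtin echo; behavior varies by shell, which is part of the problem):

```shell
var="-n"
echo "$var"           # bash's echo swallows this as its own -n flag
printf '%s\n' "$var"  # printf treats it as data and prints: -n
```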
> People can now enable debug mode, by running your script as TRACE=1 ./script.sh instead of ./script.sh.
The above "if" condition will set xtrace even when the user explicitly disables tracing by setting TRACE=0.
A correct way of doing this would be:

if [[ "${TRACE-0}" == "1" ]]; then set -o xtrace; fi

But, more importantly, people will google how to set the cwd to the script's directory more often than they will google how to go to an absolute path. Having 'cd "$(dirname "$0")"' as a reference in an article discussing best practices, along with the topic of changing the directory early, is a good idea.
My girlfriend complained about Firefox aalllways needing updates every time she starts it. Yeah, because she used Chrome most of the time, if you start Firefox once every other month, of course that's going to happen every time. This sounds like a similar issue: the software may not be the friendliest, but you can't really expect another outcome if you never use it because you don't like it because you never use it.
while getopts :hvr:e: opt
do
  case $opt in
    v)
      verbose=true
      ;;
    e)
      option_e="$OPTARG"
      ;;
    r)
      option_r="$option_r $OPTARG"
      ;;
    h)
      usage
      exit 1
      ;;
    \?)
      echo "Invalid option: -$OPTARG" >&2
      usage # call some echos to display docs or something...
      exit 2
      ;;
  esac
done

For someone who knows errexit can't be trusted, and codes defensively anyway, it's fine.
* safe ways to do things in bash: https://github.com/anordal/shellharden/blob/master/how_to_do...
* better scripting: https://robertmuth.blogspot.in/2012/08/better-bash-scripting...
* robust scripting: https://www.davidpashley.com/articles/writing-robust-shell-s...
Now when I'm processing files with BASH, I nearly always end up copying stuff from there as it just bypasses common errors such as not handling whitespace or filenames that contain line breaks.
The order of commandline args shouldn't matter.
Env vars are better at passing key/value inputs than commandline arguments are.
Process-substitution can often be used to avoid intermediate files, e.g. `diff some-file <(some command)` rather than `some command > temp; diff some-file temp`
If you're making intermediate files, make a temp dir and `cd` into that
- Delete temp dirs using an exit trap (more reliable than e.g. putting it at the end of the script)
- It may be useful to copy `$PWD` into a variable before changing directory
Be aware of subshells and scope. For example, if we pipe into a loop, the loop is running in a sub-shell, and hence its state will be discarded afterwards:
LINE_COUNT=0
some command | while read -r X
do
  # This runs in a sub-shell; it inherits the initial LINE_COUNT from the parent,
  # but any mutations are limited to the sub-shell and will be discarded
  (( LINE_COUNT++ ))
done
echo "$LINE_COUNT" # This will echo '0', since the incremented version was discarded
Process-substitution can help with this, e.g.

LINE_COUNT=0
while read -r X
do
  # This runs in the main shell; its increments will remain afterwards
  (( LINE_COUNT++ ))
done < <(some command)
echo "$LINE_COUNT" # This will echo the number of lines output by 'some command'

$OLDPWD is set when you 'cd'. Also 'cd -' will take you back to the last directory.
Ugh, I've seen so many (bash and non-bash) cmdline tools that made it utterly annoying like that.
The special place in hell goes to people who force users to write
cmd help subcmd
instead of cmd subcmd --help
or ones that do not allow doing say cmd subcmd --verbose
because "verbose is global and doesn't belong to subcmd"

or ones where you need to write

cmd --option1 subcmd --option2 subsubcmd --option3

and need to jump all over the command line if you want to add some option after a previous invocation. And if you go "well, but the option for the command and the subcommand might have the same name": DON'T NAME THEM THE SAME, that's just confusing people and search results.
Why not use pushd/popd instead?
The programmer, who was very proud of his mastery of C, said: “How can this be? C is the language in which the very kernel of Unix is implemented!”
Master Foo replied: “That is so. Nevertheless, there is more Unix-nature in one line of shell script than there is in ten thousand lines of C.”
The programmer grew distressed. “But through the C language we experience the enlightenment of the Patriarch Ritchie! We become as one with the operating system and the machine, reaping matchless performance!”
Master Foo replied: “All that you say is true. But there is still more Unix-nature in one line of shell script than there is in ten thousand lines of C.”
The programmer scoffed at Master Foo and rose to depart. But Master Foo nodded to his student Nubi, who wrote a line of shell script on a nearby whiteboard, and said: “Master programmer, consider this pipeline. Implemented in pure C, would it not span ten thousand lines?”
The programmer muttered through his beard, contemplating what Nubi had written. Finally he agreed that it was so.
“And how many hours would you require to implement and debug that C program?” asked Nubi.
“Many,” admitted the visiting programmer. “But only a fool would spend the time to do that when so many more worthy tasks await him.”
“And who better understands the Unix-nature?” Master Foo asked. “Is it he who writes the ten thousand lines, or he who, perceiving the emptiness of the task, gains merit by not coding?”
Upon hearing this, the programmer was enlightened.
`-h` and `--help` are fine. But `help` and `h` should only display the help if the script has subcommands (like `git`, which has `git commit` as a subcommand). Scripts that don't have subcommands should treat `h` and `help` as regular arguments -- imagine if `cp h h.bak` displayed a help message instead of copying the file named "h"!
I wouldn't encourage `-help` for displaying the help because it conflicts with the syntax for a group of single-letter options (though if `-h` displays the help, there is no legitimate reason for grouping `-h` with other options).
And ideally scripts that support both option and non-option arguments should allow `--` to separate them (e.g. `rm -- --help` removes the file called "--help"). But parsing options is complicated and probably out of scope for this article.
> If appropriate, change to the script’s directory close to the start of the script. And it’s usually always appropriate.
This is very problematic if the script accepts paths as arguments, because the user would (rightly) expect paths to be interpreted relative to the original working directory rather than the script's location. A more robust approach is to compute the script's location and store it in a variable, then explicitly prepend this variable when you want paths to be relative to the script's location.
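A common sketch of that approach (the variable and file names here are illustrative, not from the article):

```shell
#!/usr/bin/env bash
# Resolve the script's own directory once, without changing the CWD,
# so user-supplied relative paths keep meaning what the user expects.
script_dir=$(cd -- "$(dirname -- "$0")" && pwd)

# Files shipped next to the script are addressed explicitly:
config_file="$script_dir/config.ini"   # hypothetical bundled file

# while "$1" etc. remain relative to the caller's working directory.
```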
As soon as output needs to be parsed — especially when it’s being fed back into other parts of the script — it gets harder. Handling errors and exceptions is even more difficult.
Things really fall down on modularity. There are tricks and conventions: for example you can put all functions to do with x in a file called lib/x.sh, prefix them all with x_, and require that all positional parameters must be declared at the top of each function with local names.
At that point though, I would rather move to a language with named parameters, namespaced modules, and exception handling. In Python, it’s really easy to do the shell bits with:
import subprocess

def sh(script, *args):
    subprocess.run(
        ['sh', '-c', script, '--', *args],
        check=True,
    )
which will let you pass in arguments with spaces and access them as properly lexed arguments in $1, $2, etc. in your script. You can even preprocess the script to be prefixed with all the usual set -exuo pipefail stuff, etc.

(Disclaimer: I'm one of the authors.) After falling in love with ShellCheck several years ago, with the help of another person, I made the ShellCheck REPL tool for Bash:
https://github.com/HenrikBengtsson/shellcheck-repl
It runs ShellCheck on the commands you type at the Bash prompt as soon as you hit ENTER.

I found it to be an excellent way of learning about pitfalls and best practices in Bash as you type, because it gives you instant feedback on possible mistakes. It won't execute the command until the ShellCheck issues are fixed, e.g. missing quotes, use of undefined variables, or incorrect array syntax.
It's designed to be flexible, e.g. you can configure ShellCheck rules to be ignored, and you can force execution by adding two spaces at the end.
License: ISC (similar to MIT). Please help improve it by giving feedback, bug reports, feature requests, PRs, etc.
Traditional shell might be:
grep -q thing < file
if [ $? -eq 0 ] ; then echo "thing is there" ; fi
vs. just using if to look at the exit status of the prior program:

if
grep -q thing < file
then
echo "thing is there"
fi
"test" and [[ are fine programs/tools for evaluating strings, looking at file system permissions, doing light math, but they aren't the only way to interact with conditionals.

I'm not saying *never* write shell scripts, but always consider doing something else, or at least add a TODO, or issue, to rewrite it in a more robust language.
always quote filenames, because you never know if there's a space in them.
filenames with dashes or periods will kill you
prepend current directory file manipulation filenames with "./", because the file might start with a period or dash
Dashes in filenames still might kill you, especially if you pass those to another command
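The tips above can be sketched together like this (the scratch files are hypothetical):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"            # scratch directory for the demo
touch -- '-rf' 'my file'     # awkward but legal filenames

# Quote names with spaces, and prefix relative names with ./ so a
# leading dash can't be parsed as an option:
rm ./'-rf'
cp './my file' './my file.bak'
```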
I'd also say that in most cases Python is also a better choice, especially when you use the ! syntax.
exec &> >( ts '[%Y-%m-%d.%H:%M:%S] ' | tee "${LOGFILENAME}" )
Case in point: Declaring an array. IMHO, it’s just not ergonomic at all. Especially not in sh/dash.
* use bats for testing
* use shfmt for code formatting
* use shellcheck for linting
export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: '
This will then prefix each traced command with the filename, function name, and line number. It can make it much easier to find where exactly something is happening when working with larger bash scripts.

[0]: https://marketplace.visualstudio.com/items?itemName=timonwon...
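A self-contained way to see the effect (bash only; the greet function is made up for the demo):

```shell
#!/usr/bin/env bash
export PS4='+ ${BASH_SOURCE:-}:${FUNCNAME[0]:-}:L${LINENO:-}: '

greet() { echo "hello"; }

# xtrace output goes to stderr; capture it so we can look at it.
trace=$( { set -x; greet; set +x; } 2>&1 >/dev/null )
# Each traced line now carries file:function:Lline for the command
# being executed, including commands inside greet.
```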
I think this site is amazing, and it must be at least two decades old.
A better resource is https://mywiki.wooledge.org/BashGuide Also, a preliminary read of https://mywiki.wooledge.org/BashPitfalls is advised.
Using shellcheck as a bash/shell linter is the ultimate. When you get a new warning, you can look up the code and learn why it's complaining.
echo_command()
{
    echo
    echo '$' "$@"
}

echo_and_run_command()
{
    echo_command "$@"
    "$@"
}
Then something like:

main()
{
    # For simple commands that do not use | < > etc.
    echo_and_run_command cp --verbose ...

    # More complex commands
    echo_command grep ... '|' find ...
    grep ... | find ...
}
main "$@"

### Usage and help - change this for each script
##############################################################################
# shellcheck disable=SC2015
[[ "${__usage+x}" ]] || read -r -d '' __usage <<-'EOF' || true # exits non-zero when EOF encountered
-t --timestamps Enable timestamps in output
-v --verbose Enable verbose mode, print script as it is executed
-d --debug Enables debug mode
-h --help This page
-n --no-color Disable color output
EOF
Then you get to refer to ${arg_t} for the --timestamps option, etc.

set -x
does this seamlessly without cluttering up your script. You can even run your script with sh -x script
if you didn't always want the logging output.

Don't.
Use a proper programming language instead. bash (and similar scripting languages) are non-portable (Linux/Windows) and the opposite of what I want in a good programming language.
I agree we need a shell scripting language, I disagree that bash zsh or anything that frequently uses double square brackets and awful program names is the epitome of shell scripting language design.
giving up on the notion "others will use or collaborate with my scripts" was the single most productive thing i've done for my scripting.
A couple of days ago this link was posted to hn http://mywiki.wooledge.org/BashFAQ/105
It showed me once again how little bash I know even after all those years. I checked the examples to see if only set -e is dangerous or also set -o like the author suggested, and sure enough it's just as bad as set -e. You just have to thoroughly check your bash scripts and do proper error handling.
Adding an extension to make it easier to tell what's inside without opening it is being lazy rather than following best practices. Best practice is half a century of leaving them off.
Unlike Windows, which ignores extensions and lets you run a command omitting them, Unix has a better (I'm not saying perfect) approach which allows the metadata to be pulled from the first line of the file, tuned exactly to what the script needs. No sane extension is going to capture this info well.
Extensions expose (usually incompletely) the implementation details of what's inside, to the detriment of the humans using them (the OS doesn't care), who will then guess at what the extension means.
However, many extensions are WRONG, or too vague to actually tell what interpreter to call on them - which this subgroup of devs does all the time, most commonly using the wrong version of python (wrong major, wrong minor, not from a specific python env) and breaking things. .sh is manifestly wrong as an extension for Bash scripts, which have different syntax.
The exception is scripts that should be "."-ed in (sourced), where having a meaningful .sh or .bash (which are NOT interchangeable) is ACTUALLY good, because it highlights that they are NOT COMMANDS. (and execute isn't enabled)
If you want a script to make it easier to list commands that are shell scripts or whatever, there's a simple one at the end of:
https://www.talisman.org/~erlkonig/documents/commandname-ext...
I've seen several cases of .sh scripts which contained perl code, python, or were actually binary, because the final lynchpin in this (abridged) case against extensions is that in complex systems the extensions often have to be kept even after the implementation is upgraded to avoid breaking callers. It's very normal for a program to start as shell, get upgraded to python, and sometimes again to something compiled. Setting up a situation which would force the extension to be changed in all clients in a large network to keep it accurate is beyond stupid.
Don't use extensions on commands, and stop trying to rationalize it because you (for those to whom this applies) just like to do "ls *.sh" (on your bash scripts). These are a violation of Unix best practices, and cause harm when humans try to interpret them.
shellcheck
It's like pylint for your shellscripts.
- `set -o errtrace`: trap errors inside functions
- `shopt -s inherit_errexit`: command substitutions inherit errexit
Unfortunately the list of Bash pitfalls is neverending, but that's a good start.
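A small sketch of what errtrace changes (the function name f and the trap body are made up for the demo; inherit_errexit needs bash >= 4.4):

```shell
#!/usr/bin/env bash
set -o errtrace                               # ERR trap fires inside functions too
shopt -s inherit_errexit 2>/dev/null || true  # $(...) inherits errexit, bash >= 4.4

trapped=""
trap 'trapped="$trapped ${FUNCNAME[0]:-main} "' ERR

f() { false; }   # hypothetical failing step
f                # with errtrace the trap records "f"; without it, only "main"
true             # keep the demo's own exit status clean
```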
And the right-hand-side of a variable assignment.
And the WORD in a case statement. (Not in the patterns, though).
Plus a bunch of other single-token(?) contexts.
I don't recommend relying on the context though, it's clever and makes it hard to verify that the script does not have expansion bugs.
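For illustration, the contexts mentioned above behave like this:

```shell
#!/usr/bin/env bash
v='two words *'

# Right-hand side of an assignment: no word splitting or globbing.
copy=$v

# The WORD of a case statement is likewise not split or globbed,
# but the patterns still need quoting to be taken literally:
case $v in
    'two words *') matched=yes ;;
    *)             matched=no ;;
esac
```

As the comment above says, quoting anyway is easier to audit than remembering which contexts are safe.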
1. end all your lines C-style; this may save your life many times;
2. declare -is variables and -r CONSTANTS at the beginning, again, C-style;
3. print TIMESTAMP="$(date +%Y-%m-%d\ %H:%M:%S)"; where appropriate if your script logs its job;
4. Contrary to OP's recommendation I strongly try to stick to pure sh compatibility in smaller scripts so they can run on routers, TVs, Androids and other busybox-like devices; bash isn't everywhere.
date -u +%Y-%m-%dT%TZ
because the time zone is unambiguous, the command works with POSIX date, and it's valid under both ISO 8601 and RFC 3339. cd "$(dirname "$0")"
part of this? This is changing to the directory of where the script is all cases?EDIT: I should've just tested this to see :) I did and it does exactly that. Very helpful. I didn't realize $0 is always the first argument. Kind of like how `self` is the first implicit argument in OOP methods?
It has some great tools for user interaction, too, including secure string handling for credentials, a TUI framework, easy parallelism, unit tests and lots more.
To be honest these days I use shell scripts, and if they get too large I'll replace them with either golang or python. I don't love python, especially when dependencies are required, but it is portable and has a lot of things built in that mean executing "standard binaries" isn't required so often.
Kudos for nicely put tips that are easy to follow and understand.
I'm a newcomer to bash
Usually there is no need to return to original directory. Change of directory is process-local (script-local) so the calling process is not affected by this 'cd' in the script.
For example, instead of

USER=mail
UID=8

use

USER=mail
UID=$(id -u $USER)

It improves portability and removes potential sources of errors. Also note that this is something that should be done in any programming language, not just shell scripts.
The explanation for that wasn't really an explanation either...
I won't check which shell version those tips apply to, and will continue writing POSIX shell as much as I can. I might check which of those suggestions are POSIX, though.
Any reason to avoid writing bash scripts, other than purism?
1. use shellcheck
2. use shfmt (to format your shell script)
3. set -euo pipefail (much shorter)
my slight complaint about bash is that it disallows spaces around =; X=100 is OK, X = 100 is not, and sometimes I just make mistakes like that.

My order of preference would be:
1. use shellcheck.
… rest …

Perhaps not surprisingly, it's Bourne shell, not bash. But still, it's an actual published standard all can refer to when the language in question is "shell scripts", i.e. .sh files, or "shell commands" in some context where a shell command is called for (e.g. portable makefiles).
https://pubs.opengroup.org/onlinepubs/9699919799/utilities/V3_chap02.html

* Write help text to stdout, not stderr, so I can grep it
* Set exit status to 0 on success and 1 or some other small positive integer on failure so I can use || and &&
> 9. Always quote variable accesses with double-quotes.
Does the author refer to "$MYVAR"? Why would you want to use that over ${MYVAR}?
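They solve different problems: the braces delimit the variable name, while the quotes suppress word splitting and globbing. A quick sketch:

```shell
#!/usr/bin/env bash
MYVAR='two words'

# Braces alone do NOT quote: the value is still split into words.
unquoted_count=$(printf '%s\n' ${MYVAR} | wc -l)   # 2 lines

# Double quotes keep it as a single word.
quoted_count=$(printf '%s\n' "$MYVAR" | wc -l)     # 1 line

# Braces matter when text follows the name directly:
suffix="${MYVAR}_x"   # without braces, bash would look up $MYVAR_x instead
```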
That said, I like doing the usage like so for short scripts:
#!/bin/sh
#
# Sleep until a specific time. This takes a time in 24-hour clock format and
# sleeps until the next instance of this time is reached.
#
# % sleep-until 15:30:45
# % sleep-until 15:30 # Until 15:30:00
# % sleep-until 15 # Until 15:00:00
#
# Or to sleep until a specific date:
#
# % sleep-until 2023-01-01T15:00:00
#
# Or space instead of T; can abbreviate time like above.
echo " $@" | grep -q -- ' -h' && { sed '1,2d; /^[^#]/q; s/^# \?//;' "$0" | sed '$d'; exit 0; } # Show docs
That will re-use the comment as the help:

% sleep-until -h
Sleep until a specific time. This takes a time in 24-hour clock format and
sleeps until the next instance of this time is reached.
…
It's a bit of a byzantine incarnation, but I just copy it from one script to the next, it saves a bit of plumbing, and generally looks pretty nice IMO.

I'm not 100% sure if I thought of this myself or if it's something I once saw somewhere.
This works around malicious filenames that may start with a '-'. Especially important if you're running an 'rm' command
Edit: another workaround is to ensure that files are always absolute pathnames or even starting with './' for relative ones.
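Both workarounds in one short sketch (the scratch filenames are hypothetical):

```shell
#!/usr/bin/env bash
cd "$(mktemp -d)"
touch -- '--help' '-rf'   # filenames that look like options

# '--' marks the end of options: everything after it is an operand.
rm -- '--help'

# Alternatively, make the path unambiguous with a './' prefix:
rm ./-rf
```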
But I use set -euo pipefail. I think -u is -o nounset, etc.? Just easier to type.
2. Shell scripts are wonderful, but once they exceed a few lines (give or take 50), they've entered the fast track on becoming a maintenance headache and a liability.
.sh is appropriate for a shell library module which you source from another shell script. It is not really appropriate for something which is more abstract (such as a "program" inside your PATH).
set -e / set -o errexit will only be helpful if you fundamentally understand exactly what it does; if you don't, you are bound to end up with broken code. Once you fundamentally understand set -e you will be better placed to decide whether it is appropriate to use it or more appropriate to simply do proper error handling. The oft-repeated mantra of using set -e is really misleading a lot of people into thinking that bash has some sane mode of operation which will reduce their chance of making mistakes; people should never be misled into thinking that bash will ever do anything sane.
set -u / set -o nounset breaks a lot of perfectly sensible bash idioms and is generally bad at what proponents of it claim it will help solve (using unset variables by accident or by misspelling). There are better linters which solve this problem much better without having to sacrifice some of what makes bash scripts easier to write/read.
set -o pipefail is not an improvement/detriment, it is simply changing the way that one of bash's features functions. pipefail should only be set around specific uses of pipelines when it is known that it will produce the intended result. For example, take this common idiom:
if foo | grep -q bar; then ...
Under pipefail, the above will NOT behave correctly (i.e. it will evaluate to a non-zero exit code even when a match is found) if grep -q closes its input as soon as it finds a match and foo handles the resulting SIGPIPE by exiting with a non-zero status code.

Guarding set -x / set -o xtrace seems unnecessary; -x is already automatically inherited. Just set it before running the program.
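The spurious failure described above can be reproduced with yes standing in for foo (grep -q exits on the first match, and yes then dies of SIGPIPE on its next write):

```shell
#!/usr/bin/env bash
yes bar | grep -q bar
default_status=$?        # grep's status: 0, the intuitively correct result

set -o pipefail
yes bar | grep -q bar
pipefail_status=$?       # non-zero (128 + SIGPIPE): a spurious "failure"
set +o pipefail
```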
Good advice on using [[ but it is important to fundamentally understand the nuances of this, quoting rules change within the context of [[.
Accepting h and help seems incredibly unnecessary. If someone who has never used a unix-like operating system happens upon your script then they may find it useful. But I don't think catering to such a low common denominator makes sense. Your script should just handle invalid arguments by printing a usage statement with maybe a hint of how to get a full help message.
I'd say changing to your script's directory is almost never appropriate.
Shellcheck, while useful, is useful only if you understand bash well.
The lesson here is that if you think that you have spent enough time writing bash to suggest best practices, you've not spent enough time writing bash. Only when you realise that the best practice is to not use bash have you used bash long enough (or short enough).
If you want to write a script which you're going to rely on or distribute, learn bash inside out and then carefully consider if it's still the right option.
If you are unwilling or unable to learn bash inside out then please use something else.
Do not be fooled into thinking that some "best practices" you read online will save you from bash.
1. Bash shouldn't be used, not because of portability, but because its features aren't worth their weight and can be picked up by another command. I recommend any POSIX-compliant shell (dash, or bash --posix) so you aren't tempted to use features of bash and zsh that are pointless, tricky, or are there for interactivity. Current POSIX does quite well for what you would use shell for.
2. Never use #!/usr/bin/env bash. Even if you are using full bash, bash should be installed in /usr/bin/bash. If you don't even know something this basic about the environment, then you shouldn't be programming it, the script is likely to create a mess somewhere in the already strange environment of the system.
3. Don't use extensions unless you're writing for Windows machines. Do you add extensions to any other executable? head and sed can help you retrieve the first line of a file, and neither of them has an extension.
4, 5, 6. You may do this in obscure scenarios where you absolutely cannot have a script run if there is any unforeseen error, but it's definitely not something that should be put on without careful consideration; http://mywiki.wooledge.org/BashPitfalls#set_-euo_pipefail explains this better. And it goes without saying that this is not a substitute for proper error handling.
7. I agree that people should trace their shell scripts, but this has nothing to do with shell.
8. [[ ]] is powerful, so I very often see it used when the [ builtin would suffice. Also, [[ is a shell keyword (like if or done), not a builtin command.
9. Quote only what needs quoting. If you don't know what needs quoting, then you don't understand your script. I know it seems like a waste of time, but it will make you a much better shell programmer than these always do/don't do X unless Y then do Z rules that we are spouting.
10. Use either local or global variables in functions, depending on which you want. I see no reason to jump through this weird hoop because it might become an easily fixable problem later.
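For what it's worth, bash locals are dynamically scoped, which is the hoop being discussed; a minimal sketch (function names are made up):

```shell
#!/usr/bin/env bash
helper() { echo "$x"; }   # reads whatever x is in scope at call time

outer() {
    local x="from outer"  # shadows the global for outer AND its callees
    helper
}

x="global"
seen=$(outer)             # helper sees outer's local, not the global
```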
11. This is a feature; make usage appear when you blink, I don't care. If anything, variations of -h are too limited.
12. Finally, one "opinion" we agree on, not sure how else to redirect to stderr, but I'm sure that other way isn't as good as this one.
13. No, read the usage. If you want inferior long options, then you can add them to your scripts, but they are not self documenting, they only serve to make commands less readable and clutter completion.
14. No, it's not usually appropriate; do you want all installed scripts writing to /bin? The directory the script runs in should be clearly communicated to the user. With cd "$(dirname "$0")", "it runs in the directory the script is in" needs to be communicated somewhere, or you have failed.
15. Yes, use ShellCheck.
16. Please call your list Bash Script Practices if it's unrelated to shell.
Credibility gone.