command -v tput &>/dev/null && [ -t 1 ] && [ -z "${NO_COLOR:-}" ] || tput() { true; }
This checks that the tput command exists (using the bash `command` builtin rather than which(1), since which surprisingly can't always be relied upon to be installed, even on modern GNU/Linux systems), that stdout is a tty, and that the NO_COLOR environment variable is not set. If any of these conditions is false, a no-op tput function is defined.

This little snippet of setup lets you sprinkle tput invocations through your script knowing that it's going to do the right thing in any situation.
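Put together, the pattern looks like this as a complete script (the error message is just an illustration):

```shell
#!/usr/bin/env bash
# If tput is missing, stdout is not a tty, or NO_COLOR is set, define a
# no-op tput so later color calls silently do nothing.
command -v tput &>/dev/null && [ -t 1 ] && [ -z "${NO_COLOR:-}" ] || tput() { true; }

# tput can now be called unconditionally:
printf '%sERROR:%s something failed\n' "$(tput setaf 1)" "$(tput sgr0)"
```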
if [ -t 1 ] && [ -z "${NO_COLOR:-}" ]; then
COLOR_RESET=$'\e[0m'
COLOR_RED=$'\e[31m'
COLOR_GREEN=$'\e[32m'
COLOR_BLUE=$'\e[34m'
else
COLOR_RESET=''
COLOR_RED=''
COLOR_GREEN=''
COLOR_BLUE=''
fi
For more about this see Unix Shell Script Tactics: https://github.com/SixArm/unix-shell-script-tactics/tree/mai...

Be aware there's an escape character at the start of each color string (the POSIX equivalent of bash's $'\e'); Hacker News seems to cut that character out.
RED=$(tput setaf 1)
GREEN=$(tput setaf 2)
RESET=$(tput sgr0)

It ticks so many boxes:
* Printing non-output information to stdout (usage information is not normal program output, use stderr instead)
* Using copious amounts of colours everywhere to draw attention to error messages.
* ... Because you've flooded my screen with an even larger amount of irrelevant noise which I don't care about (what is being run).
* Coming up with a completely custom and never before seen way of describing the necessary options and arguments for a program.
* Trying to auto-detect the operating system instead of just documenting the non-standard dependencies and providing a way to override them (inevitably extremely fragile and makes the end-user experience worse). If you are going to implement automatic fallbacks, at least provide a warning to the end user.
* ... All because you've tried to implement a "helpful" (but unnecessary) feature of a timeout which the person using your script could have handled themselves instead.
* pipefail when nothing is being piped (pipefail is not a "fix", it is an option; whether it is appropriate depends on the pipeline, and it's not something you should blanket-apply to your codebase)
* Spamming output in the current directory without me specifying where you should put it or expecting it to even happen.
* Using set -e without understanding how it works (and where it doesn't work).
* #!/bin/bash instead of #!/usr/bin/env bash
* [ instead of [[
* -z instead of actually checking how many arguments you got passed and trusting the end user if they do something weird like pass an empty string to your program
* echo instead of printf
* `print_and_execute sdk install java $DEFAULT_JAVA_VERSION` who asked you to install things?
* `grep -h "^sdk use" "./prepare_$fork.sh" | cut -d' ' -f4 | while read -r version; do` You're seriously grepping shell scripts to determine what things you should install?
* Unquoted variables all over the place.
* Not using mktemp to hold all the temporary files and an exit trap to make sure they're cleaned up in most cases.
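The last point, mktemp plus an exit trap, can be sketched like this (filenames are illustrative):

```shell
#!/usr/bin/env bash
# Create one private temporary directory for all scratch files and remove
# it on exit, whether the script succeeds or fails.
tmpdir="$(mktemp -d)"
trap 'rm -rf -- "$tmpdir"' EXIT

printf 'scratch data\n' > "$tmpdir/work.txt"
cat "$tmpdir/work.txt"
```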
Except that'll pick up an old (2006!) (unsupported, I'm guessing) version of bash (3.2.57) on my MacBook rather than the useful version (5.2.26) installed by Homebrew.
> -z instead of actually checking how many arguments you got
I think that's fine here, though? It's specifically wanting the first argument to be a non-empty string to be interpolated into a filename later. Allowing the user to pass an empty string for a name that has to be non-empty is nonsense in this situation.
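That argument can be sketched as a function (the usage text and filename pattern are hypothetical): both a missing and an empty first argument are rejected, since either would produce a nonsense filename.

```shell
# Reject a missing first argument and an empty one: either way the
# interpolated filename would be nonsense.
require_name() {
    if [ "$#" -ne 1 ] || [ -z "$1" ]; then
        printf 'usage: myscript NAME\n' >&2
        return 1
    fi
    printf 'result_%s.txt\n' "$1"
}

require_name demo   # prints: result_demo.txt
```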
> You're seriously grepping shell scripts to determine what things you should install?
How would you arrange it? You have a `prepare_X.sh` script which may need to activate a specific Java SDK (some of them don't) for the test in question and obviously that needs to be installed before the prepare script can be run. I suppose you could centralise it into a JSON file and extract it using something like `jq` but then you lose the "drop the files into the directory to be picked up" convenience (and probably get merge conflicts when two people add their own information to the same file...)
It’s only when things are intended to be reused or have a more generic purpose as a tool that you need them to behave better and in a more standard way.
For better user friendliness, I prefer to have the logging level determined by the value of a variable (e.g. LOG_LEVEL) and then the user can decide whether they want to see every single variable assignment or just a broad outline of what the script is doing.
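A sketch of that approach (the function names and the 0/1/2 level numbering are my own assumptions):

```shell
# LOG_LEVEL: 0 = errors only, 1 = broad outline, 2 = every detail.
LOG_LEVEL="${LOG_LEVEL:-1}"

log_error() { printf 'ERROR: %s\n' "$*" >&2; }
log_info()  { [ "$LOG_LEVEL" -ge 1 ] || return 0; printf 'INFO: %s\n' "$*"; }
log_debug() { [ "$LOG_LEVEL" -ge 2 ] || return 0; printf 'DEBUG: %s\n' "$*"; }

log_info "starting up"   # shown at the default level
log_debug "x=42"         # only shown when LOG_LEVEL is 2 or higher
```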
I was taken aback by the "print_and_execute" function - if you want to make a wrapper like that, then maybe a shorter name would be better? (Also, the use of "echo" sets off alarm bells).
This one becomes very apparent when using NixOS where /bin/bash doesn’t exist. The vast majority of bash scripts in the wild won’t run on NixOS out of the box.
Your tone is very dismissive. Instead of criticisms, all of these could be phrased as suggestions. It's like criticising your junior for being enthusiastic about everything they learned today.
For anyone else not familiar with this term
This made me chuckle.
> Your tone is very dismissive.
I know, but honestly when I see a post on the front page of HN with recommendations on how to do something and the recommendations (and resulting code) are just bad then I can't help myself.
The issue is that trying to phrase things nicely takes more effort than I could genuinely be bothered to put in (never mind the fact I read the whole script).
So instead my aim was to be as neutral sounding as possible, although I agree that the end result was still more dismissive than I would have hoped to achieve.
I think it’s pretty good hygiene to set pipefail in the beginning of every script, even if you end up not using any pipes. And at that point is it that important to go back and remove it only to then have to remember that you removed it once you add a pipe?
Sometimes you should even be using PIPESTATUS instead.
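A small illustration (bash-specific): PIPESTATUS holds every stage's exit status, but only until the next command runs, so it has to be captured immediately.

```shell
#!/usr/bin/env bash
# The middle stage fails (grep finds no match, exit 1) while the pipeline
# as a whole "succeeds" because the last stage (cat) exits 0.
printf 'a\nb\n' | grep -c 'z' | cat
statuses=("${PIPESTATUS[@]}")   # capture before anything overwrites it
printf 'stage statuses: %s\n' "${statuses[*]}"
```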
On the scale of care, “the script can blow up in surprising ways” severely outweighs “error messages are in red.” Also, as someone else pointed out, what if I’m redirecting to a file?
In truth when I find myself writing a large "program" in Bash such that shellcheck is cumbersome it's a good indication that it should instead be written in a compiled language.
For the false positives, just put in the appropriate comment to disable ShellCheck's error ahead of that line e.g.
# shellcheck disable=SC2034,SC2015
That stops the warning and also documents that you've used ShellCheck, seen the specific warning and know that it's not relevant to you.
As others have pointed out, you can tune shellcheck / ignore certain warnings, if they’re truly noise to you. Personally, I view it like mypy: if it yells at me, I’ve probably at the very least gone against a best practice (like reusing a variable name for something different). Sometimes, I’m fine with that, and I direct it to be ignored, but at least I’ve been forced to think about it.
Here’s a script that uses real language things like a function and error checking, but which also prints “oh no”:
set -e

f() {
    false
    echo oh
}

if f
then
    echo no
fi
set -e is off when your function is called as a predicate. That's such a letdown from expected to actual behavior that I threw it in the bin as a programming language. The only remedy is for each function to be its own script. Great!

In terms of sh enlightenment, one of the steps before getting to the above is realizing that every time you use ";" you are using a technique to jam a multi-line expression onto a single line. It starts to feel incongruous to mix single-line and multi-line syntax:
# weird
if foo; then
    bar
fi

# ahah
if foo
then
    bar
fi
Writing long scripts without semicolons felt refreshing, like I was using the syntax in the way that nature intended.

Shell scripting has its place. Command invocation with sh along with C functions is the de-facto API in Linux. Shell scripts need to fail fast and hard, though, and leave it up to the caller (either a different language or another shell script) to figure out how to handle errors.
https://github.com/containerd/nerdctl/blob/main/extras/rootl...
I have since copied this pattern for many scripts: logging functions, grouping all global vars and constants at the top and creating subcommands using shift.
if [ "$(uname -s)" == "Linux” ]; then
stuff-goes-here
else # Assume MacOS
While probably true for most folks, that's hardly what I'd call great for everybody not on Linux or a Mac.

Though for that snippet I would argue for testing for the command rather than the OS (unless Macs or some other common arrangement have something incompatible in the standard path with the same command name?).
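A sketch of testing for the command instead of the OS, with an explicit warning on fallback (gtimeout is the name Homebrew's coreutils package gives the GNU tool on macOS):

```shell
# Prefer whichever timeout implementation exists; warn instead of silently
# guessing based on the OS.
if command -v timeout >/dev/null 2>&1; then
    timeout_cmd=timeout
elif command -v gtimeout >/dev/null 2>&1; then
    timeout_cmd=gtimeout
else
    printf 'warning: no timeout command found; commands will run unbounded\n' >&2
    timeout_cmd=
fi
```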
For rarely run scripts, consider checking if required flags are missing and query for user input, for example:
[[ -z "$filename" ]] && printf "Enter filename to edit: " && read filename
Power users already know to always try `-h / --help` first, but this way even people who are less familiar with the command line can use your tool.

If that's a script that's run very rarely or once, entering the fields sequentially could also save time, compared to the common `try to remember flags -> error -> check help -> success` flow.
Use a better programming language. Go, Typescript, Rust, Python, and even Perl come to mind.
I don't think LOC is the correct criterion.
I do solve many problems with bash and I enjoy the simplicity of shell coding. I even have long bash scripts. But I do agree that shell scripting is the right solution only if
= you can solve the problem quickly
= you don't need data structures
= you don't need math
= you don't need concurrency

Sometimes options are limited to what you know already.
You mean, the complexity of shell coding? Any operation that in a regular language is like foo.method(arg) in shell expands into something like ${foo#/&$arg#%} or `tool1 \`tool2 "${foo}"\` bar | xargs -0 baz`.
Meanwhile, 10 year old Bash scripts I've written still run unmodified.
Winner by a mile (from a software-longevity and low-maintenance perspective at least): Bash
Compare a Python script to a Bash script. If your Python3 script (assuming no dependencies) doesn't work after 6 months I got some questions for you.
(And I don't really get how a 6 month old Python _project_ is likely to fail. I guess I'm just good at managing my dependencies?)
Bun has similar features: https://bun.sh/docs/runtime/shell
Instead, just check $? and whether a pipe's output has returned anything at all ([ -z "$FOO" ]) or if it looks similar to what you expect. This is good enough for 99% of scripts and allows you to fail gracefully or even just keep going despite the error (which is good enough for 99.99% of cases). You can also still check intermediate pipe return status from PIPESTATUS and handle those errors gracefully too.
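A sketch of that approach (the pipeline is a stand-in): run the pipeline inside an if so a failure can't trip set -e, then inspect the output and status explicitly.

```shell
# grep finds nothing here, so the pipeline yields no output and status 1;
# the script notices, reports it, and keeps going.
if FOO="$(printf 'a\nb\n' | grep 'z')"; then
    status=0
else
    status=$?
fi
if [ -z "$FOO" ]; then
    printf 'pipeline produced no output (status %d); continuing\n' "$status"
fi
```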
Oh? I don't observe this behavior in my testing. Could you share an example? AFAIK, if you don't capture stderr, that should be passed to the user.
> "Instead, just check $? and..."
I agree that careful error handling is ideal. However, IMO it's good defensive practice to start scripts with "-e" and pipefail.
For many/most scripts, it's preferable to fail with inadequate output than to "succeed" but not perform the actions expected by the caller.
$ date +%w
0
$ cat foo.sh
#!/usr/bin/env sh
set -x
set -eu -o pipefail
echo "start of script"
echo "start of pipe" | cat | false | cat | cat
if [ "$(date +%w)" = "0" ] ; then
echo "It's sunday! Here we do something important!"
fi
$ sh foo.sh
+ set -eu -o pipefail
+ echo 'start of script'
start of script
+ echo 'start of pipe'
+ cat
+ false
+ cat
+ cat
$
Notice how the script exits, and prints the last pipe it ran? It should have printed the 'if ..' line next. It didn't, because the script exited with an error. But it didn't tell you that.

If you later find out the script has been failing, and find this output, you can guess the pipe failed (it doesn't actually say it failed), but you don't know what part of the pipe failed or why. And you only know this much because tracing was enabled.
If tracing is disabled (the default for most people), you would have only seen 'start of script' and then the program returning. Would have looked totally normal, and you'd be none the wiser unless whatever was running this script was also checking its return status and blaring a warning if it exited non-zero, and then you have an investigation to begin with no details.
> IMO it's good defensive practice to start scripts with "-e" and pipefail.
If by "defensive" you mean "creating unexpected failures and you won't know where in your script the failure happened or why", then I don't like defensive practice.
I cannot remember a single instance in 20 years where pipefail helped me. But plenty of times where I spent hours trying to figure out where a script was crashing and why, long after it had been crashing for weeks/months, unbeknownst to me. To be sure, there were reasons why the pipe failed, but in almost all cases it didn't matter, because either I got the output I needed or didn't.
> it's preferable to fail with inadequate output than to "succeed" but not perform the actions expected by the caller.
I can't disagree more. You can "succeed" and still detect problems and handle them or exit gracefully. Failing with no explanation just wastes everybody's time.
Furthermore, this is the kind of practice in backend and web development that keeps causing web apps to stop working, but the user gets no notification whatsoever, and so can't report an error, much less even know an error is happening. I've had this happen to me a half dozen times in the past month, from a bank's website, from a consumer goods company's website, even from a government website. Luckily I am a software engineer and know how to trace backend network calls, so I could discover what was going on; no normal user can do that.
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
This is based on this Stack Overflow answer: https://stackoverflow.com/questions/59895/how-do-i-get-the-d...
I never got why Bash doesn't have a reliable "this file's path" feature and why people always take the current working directory for granted!
readonly SCRIPT_SRC="$(dirname "${BASH_SOURCE[${#BASH_SOURCE[@]} - 1]}")"
readonly SCRIPT_DIR="$(cd "${SCRIPT_SRC}" >/dev/null 2>&1 && pwd)"
readonly SCRIPT_NAME=$(basename "$0")

script_dir="$(dirname "$(realpath "$0")")"
Hasn't failed me so far and it's easy enough to remember
Validating parameters - a built in declarative feature! E.g.: ValidateNotNullOrEmpty.
Showing progress — also built in, and doesn’t pollute the output stream so you can process returned text AND see progress at the same time. (Write-Progress)
Error handling — Try { } Catch { } Finally { } works just like with proper programming languages.
Platform specific — PowerShell doesn’t rely on a huge collection of non-standard CLI tools for essential functionality. It has built-in portable commands for sorting, filtering, format conversions, and many more. Works the same on Linux and Windows.
Etc…
PS: Another super power that bash users aren’t even aware they’re missing out on is that PowerShell can be embedded into a process as a library (not an external process!!) and used to build an entire GUI that just wraps the CLI commands. This works because the inputs and outputs are strongly typed objects so you can bind UI controls to them trivially. It can also define custom virtual file systems with arbitrary capabilities so you can bind tree navigation controls to your services or whatever. You can “cd” into IIS, Exchange, and SQL and navigate them like they’re a drive. Try that with bash!
But interactively, I much prefer Unix shells over PowerShell. When you don't have edge cases and user input validation to deal with, these quirks become much more manageable. Maybe I am lacking experience, but I find PowerShell uncomfortable to use, and I don't know if it has all these fancy interactive features many Unix shell have nowadays.
What you are saying essentially is that PowerShell is a better programming language than bash, quite a low bar actually. But then you have to compare it to real programming languages, like Perl or Python.
Perl has many shell-like features, the best regex support of any language, which is useful when everything is text, many powerful features, and an extensive ecosystem.
Python is less shell-like but is one of the most popular languages today, with a huge ecosystem, clean code, and pretty good two-way integration, which means you can not only run Python from your executable, but Python can also call back into it.
If what you are after is portability and built-in commands, then the competition is BusyBox, a ~1MB self-contained executable providing the most common Unix commands and a shell, very popular for embedded systems.
In some sense, yes, but there is no distinct boundary. Or at least, there ought not to be one!
A criticism a lot of people (including me) had of Windows in the NT4 and 2000 days was that there was an enormous gap between click-ops and heavyweight automation using C++ and COM objects (or even VBScript or VB6 for that matter). There wasn't an interactive shell that smoothly bridged these worlds.
That's why many Linux users just assumed that Windows has no automation capability at all: They started with click-ops, never got past the gaping chasm, and just weren't aware that there was anything on the other side. There was, it just wasn't discoverable unless you were already an experienced developer.
PowerShell bridges that gap, extending quite a bit in both directions.
For example, I can use C# to write a PowerShell module that has the full power of a "proper" programming language, IDE with debug, etc... but still inherits the PS pipeline scaffolding so I don't have to reinvent the wheel for parameter parsing, tab-complete, output formatting, etc...
Wait! The fact that arguments with a leading hyphen are interpreted as options is not bash's fault. It's ingrained in the convention of UNIX tools and there's nothing bash can do to mitigate it. You would have the same problem if you got rid of any shell and directly invoked commands from Python or C.
Re: non-standard tools, if you’re referring to timeout, that’s part of GNU coreutils. It’s pretty standard for Linux. BSDs also have it from what I can tell, so it’s probably a Mac-ism. In any case, you could just pipe through sleep to achieve the same thing.
> …inputs and outputs are strongly typed objects
And herein is the difference. *nix-land has everything as a file. It’s the universal communication standard, and it’s extremely unlikely to change. I have zero desire to navigate a DB as though it were a mount point, and I’m unsure why you would ever want to. Surely SQL Server has a CLI tool like MySQL and Postgres.
You just said everything “is a file” and then dismissed out of hand a system that takes that abstraction even further!
PowerShell is more UNIX than UNIX!
Pythonistas who are used to __dir__ and help() would find themselves comfortable with `gm` (get-member) and get-help to introspect commands.
You will also find Python-style dynamic typing, except with PHP syntax. $a=1; $b=2; $a + $b works in a sane manner (try that with bash). There is still funny business with type coercion: $a=1; $b="2"; $a+$b gives 3, while $b+$a gives "21".
I also found "get-command" very helpful with locating related commands. For instance "get-command -noun file" returns all the "verb-noun" commands that has the noun "file". (It gives "out-file" and "unblock-file")
Another nice thing about powershell is you can retain all your printf debugging when you are done. Using "Write-Verbose" and "Write-Debug" etc allows you to write at different log levels.
Once you are used to basic powershell, there are bunch of standard patterns like how to do Dry-Runs, and Confirmation levels. Powershell also supports closures, so people create `make` style build systems and unit test suites with them.
I'm not a fan of powershell myself as the only time I've tried it (I don't do much with Windows), I hit a problem with it (or the object I was using) not being able to handle more than 256 characters for a directory and file. That meant that I just installed cygwin and used a BASH script instead.
PowerShell blows bash out of the water. I love it.
If I wanted the features that pwsh brings, I would much rather pick a language like Go or Python where the experience is better and those things will work on any system imaginable. pwsh is really good on Windows, specifically for administrative tasks.
Both of them should be simply python and typescript compatible dlls.
> You can “cd” into IIS, Exchange, and SQL and navigate them like they’re a drive. Try that with bash!
This exists.
> PowerShell can be embedded into a process as a library... and used to build an entire GUI that just wraps the CLI commands.
Sounds pretty interesting. Can you tell me what search terms I'd use to learn more about the GUI controls? Are they portable to Linux?

The .NET library for this is System.Management.Automation.
You can call a PowerShell pipeline with one line of code: https://learn.microsoft.com/en-us/dotnet/api/system.manageme...
Unlike invoking bash (or whatever) as a process, this is much lighter weight and returns a sequence of objects with properties. You can trivially bind those to UI controls such as data tables.
Similarly the virtual file system providers expose metadata programmatically such as “available operations”, all of which adhere to uniform interfaces. You can write a generic UI once for copy, paste, expand folder, etc and turn them on or off as needed to show only what’s available at each hierarchy level.
As an example, the Citrix management consoles all work like this. Anything you can do in the GUI you can do in the CLI by definition because the GUI is just some widgets driving the same CLI code.
fish, Python, and oilshell (ysh) are ultimately on better footing.
Somehow, whenever people dance to the tune of the Google code conventions, I find they adhere to questionable practices. I think people need to realize that big tech conventions are simply their common denominator, not especially great rules that everyone should adopt for themselves.
Or filenames that contain the number zero :D
#!/bin/sh
#
# Usage : popc_unchecked BINARY_STRING
#
# Count number of 1s in BINARY_STRING. Made to demonstrate a use of IFS that
# can bite you if you do not quote all the variables you don't want to split.
len="${#1}"
count() { printf '%s\n' "$((len + 1 - $#))"; }
saved="${IFS}"
IFS=0
count 1${1}1
IFS="${saved}"
# PS: we do not run the code in a subshell because popcount needs to be highly
# performant (≖ ᴗ ≖ )

I get and love the idea, but I'd consider this implementation an anti-pattern. If the output mimics set -x but isn't doing what set -x does, it can mislead users of the script.
The author could also consider trapping DEBUG to maybe be selective while also making it a little more automatic.
It provides logging facilities with colour usage for the terminal (not for redirecting out to a file) and also decent command line parsing. It uses a great idea to specify the calling parameters in the help/usage information, so it's quick and easy to use and ensures that you have meaningful information about what parameters the script accepts.
Also, please don't write shell scripts without running them through ShellCheck. The shell has so many footguns that can be avoided by correctly following its recommendations.
sh -x "$SCRIPT"

shows a debugging trace of the script in a verbose way; it's invaluable on errors.

You can use it as a shebang too:

#!/bin/sh -x

I created a small awk util that I used throughout the script to style the output. I found it very convenient. I wonder if something similar already exists.
Some screenshots in the PR: https://github.com/ricomariani/CG-SQL-author/pull/18
Let me know guys if you like it. Any comments appreciated.
function theme() {
    ! $IS_TTY && cat || awk '
        /^([[:space:]]*)SUCCESS:/ { sub("SUCCESS:", " \033[1;32m&"); print; printf "\033[0m"; next }
        /^([[:space:]]*)ERROR:/ { sub("ERROR:", " \033[1;31m&"); print; printf "\033[0m"; next }
        /^        / { print; next }
        /^    / { print "\033[1m" $0 "\033[0m"; next }
        /^./ { print "\033[4m" $0 "\033[0m"; next }
        { print }
        END { printf "\033[0;0m" }'
}
Go to source: https://github.com/ricomariani/CG-SQL-author/blob/main/playg...

Example usage:
exit_with_help_message() {
    local exit_code=$1

    cat <<EOF | theme
CQL Playground

Sub-commands:
    help
        Show this help message
    hello
        Onboarding checklist — Get ready to use the playground
    build-cql-compiler
        Rebuild the CQL compiler
Go to source: https://github.com/ricomariani/CG-SQL-author/blob/main/playg...

cat <<EOF | theme
CQL Playground — Onboarding checklist

Required Dependencies
    The CQL compiler
        $($cql_compiler_ready && \
            echo "SUCCESS: The CQL compiler is ready ($CQL)" || \
            echo "ERROR: The CQL compiler was not found. Build it with: $CLI_NAME build-cql-compiler"
        )
Go to source: https://github.com/ricomariani/CG-SQL-author/blob/main/playg...

rm -rf ${VAR}/*
That's typically a great experience for shell scripts!

https://github.com/ValveSoftware/steam-for-linux/issues/3671
rm -rf -- "${VAR}"/*The python excerpt is my favorite example:
```
$ irb
irb(main):001:0> exit
$ irb
irb(main):001:0> quit
$ python
>>> exit
Use exit() or Ctrl-D (i.e. EOF) to exit
```
> Ruby accepts both exit and quit to accommodate the programmer’s obvious desire to quit its interactive console. Python, on the other hand, pedantically instructs the programmer how to properly do what’s requested, even though it obviously knows what is meant (since it’s displaying the error message). That’s a pretty clear-cut, albeit small, example of [Principle of Least Surprise].
Still I’ve always used Ctrl+D, which works everywhere unixy.
Python might be surprising, but in this example it's only surprising once, and helpful when it surprises you. Now you know quitting requires calling a function, and that function is named exit() (although, amusingly, python3 also accepts quit()). And being fully pedantic, it doesn't know what you mean; it is assuming what you mean and making a suggestion, which is not the same as knowing.
From here on I'm not arguing the point anymore, just recording some of the interesting things I discovered exploring this in response to your comment:
You can do this in python (which IMO is surprising, but in a different way):
```
>>> quit
Use quit() or Ctrl-D (i.e. EOF) to exit
>>> quit=True
>>> quit
True
>>> quit()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'bool' object is not callable
>>> exit()
```
But this also gives some sense to Python's behavior. `quit` and `exit` are symbol names, and they have default assignments, but they're re-assignable like any other symbol in Python. So the behavior it exhibits makes sense if we assume that they're not special objects beyond just being built in.

`exit` is a class instance according to type(). So we should be able to create something similar, and indeed we can:
```
>>> class Bar:
... def __repr__(self):
... return "Type bar() to quit!"
... def __call__(self):
... print("I quit!")
...
>>> bar = Bar()
>>> bar
Type bar() to quit!
>>> bar()
I quit!
>>>
```
Interestingly, this suggests we should be able to replace exit with our own implementation that does what Ruby does, if we really wanted to:

```
>>> class SuperExit:
... def __init__(self, real):
... self.real_exit=real
... def __repr__(self):
... print("Exiting via repr")
... self.real_exit()
... def __call__(self):
... print("Exiting via call")
... self.real_exit()
...
>>> exit = SuperExit(exit)
>>> exit
Exiting via repr
```

We can include these as well, but each keyword that you include brings diminishing returns at the cost of clutter and inconsistency in the API. Python problematically decides that returns diminish after the first possibility ("first" according to the developers, that is) in all cases. Ruby anticipates that everyone's first choice will be different and practically maximizes users' comfort.
The proliferation of Python has only made my feelings worse. Try running a 6 month old Python project that you haven't touched and see if it still runs. /eyeroll
My experience has been that 6 month old Python works fine. In fact, Python is my go-to these days for anything longer than a 5 line shell script (mostly because argparse is built in now). On the other hand, running a newly written Python script with a 6 month old version of Python, that's likely to get you into trouble.
I also recommend you catch if the argument is `-h` or `--help`. A careful user won’t just run a script with no arguments in the hopes it does nothing but print the help.¹
if [[ "${1}" =~ ^(-h|--help)$ ]]
Strictly speaking, your first command should indeed `exit 1`, but that request for help should `exit 0`.

¹ For that reason, I never make a script which runs without an argument, except if it only prints information without doing anything destructive or that the user might want to undo. Everything else must be called with an argument, even if a dummy one, to ensure intentionality.
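Combining the two points as a hypothetical function (the usage text is illustrative): no argument is an error (status 1), while an explicit request for help succeeds (status 0).

```shell
#!/usr/bin/env bash
handle_args() {
    if [ "$#" -eq 0 ]; then
        printf 'usage: myscript FILE\n' >&2
        return 1                       # missing argument: an error
    elif [[ "$1" =~ ^(-h|--help)$ ]]; then
        printf 'usage: myscript FILE\n'
        return 0                       # asked-for help: success
    fi
    printf 'processing %s\n' "$1"
}
```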
python has its place, but it's not without its own portability challenges and sneaky gotchas. I have many times written and tested a python script with (for example) 3.12 only to have a runtime error on a coworker's machine because they have an older python version that doesn't support a language feature that I used.
For small, portable scripts I try to stick to POSIX standards (shellcheck helps with this) instead of bash or python.
For bigger scripts, typically I'll reach for python or Typescript. However, that requires paying the cost of documenting and automating the setup, version detection, etc. and the cost to users for dealing with that extra setup and inevitable issues with it. Compiled languages are the next level, but obviously have their own challenges.
Let's focus on solving this then. Because the number of times that I've had to do surgery on horrible bash files because they were written for some platform and didn't run on mine...
Try running a twenty year old BASH script versus a python programme on a new ARM or RISC-V chip.
Or, try running BASH/python on some ancient AIX hardware.
https://stackoverflow.com/questions/12498304/using-bash-to-d...
I think appending an explicit || true for commands that are ok to fail makes more sense. Having state you need to keep track of just makes things less readable.
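A minimal example of that rule: under set -e, only the one command explicitly marked with || true is allowed to fail, so the failure policy stays visible right at the call site.

```shell
#!/usr/bin/env bash
set -e
# grep exits 1 when nothing matches; || true declares that acceptable here.
matches="$(grep -c 'optional-marker' /dev/null || true)"
printf 'matches: %s, still running\n' "$matches"
```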
One rule I like, is to ensure that, as well as validation, all validated information is dumped in a convenient format prior to running the rest of the script.
This is super helpful, assuming that some downstream process will need pathnames, or some other detail of the process just executed.
I'll probably also combine a few git commands for every commit and push.
I also use VMs (qemu microvms) based on docker images for development.
I asked ChatGPT to write it and double checked btw.
if [ -x "$(command -v gtimeout)" ]; then
Interesting way to check if a command is installed. How is it better than the simpler and more common "if command...; then"?

The two forms being compared are:

if [ -x "$(command -v gtimeout)" ]; then

and

if command -v gtimeout >/dev/null; then

The first invokes it in a subshell (and captures the output); the second invokes it directly and discards the output, using the return status of `command` as the input to `if`.

The superficial reason the second is "preferred" is that it's slightly better performance-wise. Not a huge difference, but it is a difference.
However the hidden, and probably more impactful reason it's preferred, is that the first can give a false negative. If the thing you want to test before calling is implemented as a shell builtin, it will fail, because the `-x` mode of `test` (and thus `[`) is a file test, whereas the return value of `command -v` is whether or not the command can be invoked.
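The false negative is easy to demonstrate with a shell builtin, using cd as the example:

```shell
#!/usr/bin/env bash
# `command -v cd` succeeds and prints "cd", but test's -x is a *file* test,
# and "cd" is not a path to an executable file.
if command -v cd >/dev/null; then
    printf 'command -v: found\n'
fi
if [ -x "$(command -v cd)" ]; then
    printf 'test -x: found\n'
else
    printf 'test -x: not found (false negative for a builtin)\n'
fi
```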
https://dave.autonoma.ca/blog/2019/05/22/typesetting-markdow...
In effect, create a list of dependencies and arguments:
#!/usr/bin/env bash
source "$HOME/bin/build-template"
DEPENDENCIES=(
"gradle,https://gradle.org"
"warp-packer,https://github.com/Reisz/warp/releases"
"linux-x64.warp-packer,https://github.com/dgiagio/warp/releases"
"osslsigncode,https://www.winehq.org"
)
ARGUMENTS+=(
"a,arch,Target operating system architecture (amd64)"
"o,os,Target operating system (linux, windows, macos)"
"u,update,Java update version number (${ARG_JAVA_UPDATE})"
"v,version,Full Java version (${ARG_JAVA_VERSION})"
)
The build-template can then be reused to enhance other shell scripts. Note how, by defining the command-line arguments as data, you can provide a general solution to printing usage information:

https://gitlab.com/DaveJarvis/KeenWrite/-/blob/main/scripts/...
Further, the same command-line arguments list can be used to parse the options:
https://gitlab.com/DaveJarvis/KeenWrite/-/blob/main/scripts/...
If you want further generalization, it's possible to have the template parse the command-line arguments automatically for any particular script. Tweak the arguments list slightly by prefixing the name of the variable to assign to the option value provided on the CLI:
ARGUMENTS+=(
"ARG_JAVA_ARCH,a,arch,Target operating system architecture (amd64)"
"ARG_JAVA_OS,o,os,Target operating system (linux, windows, macos)"
"ARG_JAVA_UPDATE,u,update,Java update version number (${ARG_JAVA_UPDATE})"
"ARG_JAVA_VERSION,v,version,Full Java version (${ARG_JAVA_VERSION})"
)
If the command-line options require running different code, it is possible to accommodate that as well, in a reusable solution.
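A hypothetical sketch of what such data-driven parsing might look like; this is my own simplified reconstruction, not the author's actual build-template:

```shell
#!/usr/bin/env bash
# Each entry: variable-to-set, short flag, long flag, description.
ARGUMENTS=(
    "ARG_OS,o,os,Target operating system"
    "ARG_ARCH,a,arch,Target architecture"
)

# Walk the CLI arguments; when a flag matches a spec, assign its value to
# the named variable.
parse_args() {
    while [ "$#" -gt 0 ]; do
        for spec in "${ARGUMENTS[@]}"; do
            IFS=',' read -r var short long _ <<< "$spec"
            if [ "$1" = "-$short" ] || [ "$1" = "--$long" ]; then
                printf -v "$var" '%s' "$2"
                shift
                break
            fi
        done
        shift
    done
}

parse_args --os linux -a amd64
printf '%s %s\n' "$ARG_OS" "$ARG_ARCH"   # linux amd64
```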