To solve this, I created a small utility that lets you prefix any command with "dotenv" to load the ".env" file.
This is how I imagine dotenv would work if it had started as a UNIX utility rather than a Node.js library.
As an example of the difference, dotenv is useful for running programs inside Docker containers — which do not inherit your interactive shell's environment variables — whereas direnv isn't particularly useful there. Ditto for programs run via init systems like systemd or even classic SysV init. On the other hand, direnv is convenient for end-user env var config, since it's aware of your shell's working directory and updates the env vars based on it without needing to run extra commands.
`env -S "$(cat .env)" <cmd>`
Believe it or not, that's all you need.
> -S, --split-string=S  process and split S into separate arguments; used to pass multiple arguments on shebang lines
edit: forgot the quotes around shell substitution
sh -c '. .env; <cmd>'
There is a way to pass commands to it which are reliably executed, like this: sh -c '. .env; "$@"' -- command arg1 arg2 arg3.
The non-option arguments passed to the shell are available as `"$@"`. A command consisting of nothing but `"$@"` executes those arguments as a command. Speaking of which, we can use `exec`: sh -c '. .env; exec "$@"' -- command arg1 arg2 arg3.
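A runnable sketch of the pattern (the file path and variable names are made up; note the added -a, which exports the sourced variables so they survive the exec):

```shell
# Hypothetical .env; GREETING and TARGET are made-up names.
printf 'GREETING=hello\nTARGET=world\n' > /tmp/demo.env

# -a (allexport) marks variables set by '. /tmp/demo.env' for export,
# so the exec'd command actually sees them in its environment.
sh -ac '. /tmp/demo.env; exec "$@"' -- sh -c 'echo "$GREETING $TARGET"'
# → hello world
```

Without -a (or explicit export lines in the file), the variables stay local to the wrapper shell and never reach the exec'd program's environment.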
What I'm getting at is that this form is fairly easily exec-able:

execl("/bin/sh", "/bin/sh", "-c", ". .env; exec \"$@\"", "--", "command",
      "arg1", "arg2", "arg3", (char *) 0);

The command and arguments can be arbitrary strings, not subject to any shell mangling.

The nice thing about utilities like env and dotenv is that they can be easily exec-ed:
execl("/usr/bin/dotenv", "/usr/bin/dotenv", "command", "arg", (char *) 0);
-S is a fairly recently added option to the GNU Coreutils env (possibly inspired by BSD?). I have a window to an Ubuntu 18 VM where it's not available.

You want $(cat .env) quoted, as in "$(cat .env)", so that the content of the file is reliably passed as one argument.
-S will split on whitespace, but it respects quoting, so spaces can be protected. Basically, .env has to be prepared with the features of -S in mind, of which it has quite a few: escape sequences like \n, comments, environment variable substitution.
https://lists.gnu.org/archive/html/coreutils/2021-10/msg0000...
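As an example of the quoting behavior (MSG is a made-up variable; this needs a GNU coreutils env new enough to have -S, i.e. 8.30 or later):

```shell
# The double quotes are consumed by env -S itself, not by the shell,
# so MSG is set to a single value containing a space.
env -S 'MSG="hello world" printenv MSG'
# → hello world
```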
It came up on the mailing list again this March. I saw the posting in my inbox and proposed that a null-terminated format be handled, exactly like /proc/<pid>/environ:
https://lists.gnu.org/archive/html/coreutils/2024-03/msg0014...
If that feature were available, this becomes
env -f .env command arg ...

Agree with the premise, but this can be achieved with actual Unix concepts; no need for anything else:

export $(cat .env | xargs)

The language-runtime dotenv projects are banned in my engineering org.
source <(cat .env | xargs)
or: export $(cat .env | xargs)
And then: unset $(cat .env | cut -d= -f1) ?

The last one unsets the environment variables that were set by the first command, ensuring they are not persisted beyond the current shell session.
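A round-trip sketch of the two commands, assuming a simple .env whose values contain no spaces or quotes (FOO and BAR are made-up names; the xargs trick breaks once values contain whitespace):

```shell
cd "$(mktemp -d)"                  # scratch dir, so we don't clobber a real .env
printf 'FOO=1\nBAR=2\n' > .env

export $(cat .env | xargs)         # load: FOO=1 BAR=2 become exported
echo "$FOO $BAR"                   # → 1 2

unset $(cat .env | cut -d= -f1)    # unload: runs `unset FOO BAR`
echo "${FOO:-gone} ${BAR:-gone}"   # → gone gone
```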
If you are worried about forgetting to execute it, there are a couple of ways to work around it, depending on your case.
env $(cat .env) [program]

Would love to hear more about why dotenv is banned at your org, though.
I believe in convention over configuration. Most of our apps have hard-coded config, with a concise/short and finite number of things that can be overridden (like 3-4 parameters, tops). Secrets get injected.
I do subscribe to the idea of the 12-factor app, but there is a line to be drawn between env config, which is more dynamic, and more persistent config that should be baked into the release.
Does sh -c '. .env; echo $MY_VAR'
do the same thing? (I am not in front of a shell at the moment.)
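For this case it does, since echo runs in the same shell that sourced the file; export only becomes necessary once an external program has to see the variable. A quick check (MY_VAR and the temp path are arbitrary):

```shell
printf 'MY_VAR=42\n' > /tmp/t.env
# The sourced variable is visible to the same shell instance, no export needed.
sh -c '. /tmp/t.env; echo $MY_VAR'
# → 42
```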
I have this in my .bashrc:
alias loadenv='export $(xargs <.env)'
source: [1]
loadenv() {
set -a
source ./.env
set +a
}

sh -ac '. ./.env; ./prog'
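The same allexport idea works inline, outside a function; a minimal sketch (DB_HOST is a made-up name):

```shell
cd "$(mktemp -d)"
printf 'DB_HOST=localhost\n' > .env

set -a        # export everything assigned from here on
. ./.env
set +a        # back to normal assignment behavior

sh -c 'echo "$DB_HOST"'   # a child process sees the variable
# → localhost
```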
Also, if you use the `.` builtin, it's a good idea to specify the path with a slash in it, so that `.` doesn't search $PATH first.

The xargs idea made me think of using bash as the parser:
bash -c "exec -c bash -c 'source $CONFIG/main.bash; env'"
This test .bash file contains multiple source-s of other .bash files, which contain a mix of comments, functions, set and env vars - just the env vars are exported by env.
This seems useful, e.g. for collating & summarising an environment for docker run -e.

This outputs the env vars to stdout; for the OP's purpose, the output could be sourced:
envFile=$(mktemp /tmp/env.XXXXXX);
bash -c "exec -c bash -c 'source $CONFIG/main.bash; env'" > $envFile;
env $(cat $envFile) sh -c 'echo $API_HOST'
# For Bourne shell, use env -i in place of exec -c:
sh -c "env -i sh -c '. $CONFIG/main.sh; env'" > $envFile
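A self-contained variant with a throwaway config file standing in for $CONFIG/main.sh (helper and API_HOST are invented for illustration):

```shell
# A config that mixes a shell function with an exported variable.
cat > /tmp/main.sh <<'EOF'
helper() { echo hi; }          # functions don't appear in env output
export API_HOST=db.internal    # exported variables do
EOF

envFile=$(mktemp /tmp/env.XXXXXX)
# env -i gives the inner shell an empty environment, so the dump
# contains (almost) only what main.sh exported.
sh -c "env -i sh -c '. /tmp/main.sh; env'" > "$envFile"
grep '^API_HOST=' "$envFile"
# → API_HOST=db.internal
```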
envup() {
local file=$([ -z "$1" ] && echo ".env" || echo ".env.$1")
if [ -f $file ]; then
set -a
source $file
set +a
else
echo "No $file file found" 1>&2
return 1
fi
}

You can also specify `envup development` to load .env.development files, should you want. Obviously this will pollute the current shell, but for me it is fine.
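A usage sketch, re-declaring the function so the snippet stands alone (the .env.development contents are made up; note `local` and `source` are bash-isms):

```shell
envup() {
  local file=$([ -z "$1" ] && echo ".env" || echo ".env.$1")
  if [ -f "$file" ]; then
    set -a
    source "$file"
    set +a
  else
    echo "No $file file found" 1>&2
    return 1
  fi
}

cd "$(mktemp -d)"
printf 'API_URL=http://localhost:8080\n' > .env.development
envup development
echo "$API_URL"
# → http://localhost:8080
```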
It's a shame that running modern software requires carefully packaging a virtual environment and then injecting a bunch of ugly global env vars.
I still think Docker shouldn't exist. Programs should simply bundle their dependencies. Running a program should be as simple as download, unzip, run. No complex hierarchical container management needed.
Alas I am not King.