More info: https://nixos.org/
If you want to reach the areas where you can really improve your productivity, you're gonna have to take weeks or even months to learn something from the bottom up. There is no way around it, and it's the same in many industries. There is no "30 minutes to get more productive than anyone else" in reality, only hard work, understanding and application of your knowledge in the real world.
I can't imagine anyone using both and thinking Nix is more complicated than Docker. And it's not close.
> at least it's not that widely used like Docker is
"Which has more users" would not enter the top ten reasons I'd choose between tools like this.
OTOH you don't have to write all this boilerplate code that's suggested in the article and your Nix environment is truly reproducible, whereas rebuilding a Docker image might not reproduce it faithfully. (Try running `apt-get install` in a Ubuntu container without running `apt-get update` first.) On top of that, if you've ever had to use more than two languages (+ toolchains) inside a single project, maybe even on multiple architectures[0], you'll appreciate Nix taking care of the entire bootstrap procedure.
[0]: Lots of dependencies that are easy to install on x86 need to be installed/compiled by hand on arm64.
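As a concrete illustration of "truly reproducible": a pinned shell.nix, written here by a small helper function. The nixpkgs revision and package list are illustrative, not from the comment above.

```shell
# Sketch: generate a shell.nix that pins nixpkgs to a fixed release tarball,
# so every machine (x86_64 or arm64) resolves the exact same toolchain.
# The pinned revision and the package list are illustrative assumptions.
write_dev_shell() {
  cat > shell.nix <<'EOF'
# Pin nixpkgs so the environment doesn't drift the way `apt-get update` does.
{ pkgs ? import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/nixos-23.11.tar.gz") {} }:
pkgs.mkShell {
  # Two languages + toolchains in one project, as described above.
  buildInputs = [ pkgs.rustc pkgs.cargo pkgs.nodejs ];
}
EOF
}
```

Running `nix-shell` in that directory then drops you into the environment, with Nix handling the bootstrap on either architecture.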
They certainly should monetise, but not making it clear is what I object to. I've raised an issue asking for clarification in their community wiki.
Not sure what needs clarification here, it's pretty up-front about its mission and features already.
> Your snapshots and abilities to rollback etc are likely to be dependent on their storage servers
Not sure where you get this from. Snapshots are stored locally unless you specify otherwise, and then you get to choose whatever storage servers you want to use. Absolutely no "hidden" costs with Nix, as it's an MIT-licensed project, and I don't think they even offer any "value-added" services or paid customer support.
Edit: reading the issue you just created (https://github.com/nix-community/wiki/issues/34), I'm even more confused. Where is "Given the need to monetise" coming from? Nowhere do they say that they have to monetise but don't know how/where, so where do you get this from? Not everything is about money.
The infrastructure costs are covered by sponsors. The "they should monetise" was a "they" that I now understand refers to external organisations rather than NixOS itself.
Just because you are not familiar with his setup doesn't mean it's not at feature parity and more.
Hell, he might even be running IntelliJ in Docker if he wishes.
Saying this as I have a similar setup with emacs.
An IDE is an Integrated Development Environment, so strapping together a few tools with no real influence over one another doesn't seem to constitute one.
Unless you wanna argue that putting these tools into a Docker image makes that container an IDE, then I don't know, maybe? Whatever floats your boat in the end.
Using a JetBrains product you pretty much just pay for the support and "works out of the box" features. You can roll your own LSP, AST Analyzers, shell scripts that bundle it all together and call it an IDE, but I would still be on the side saying it's just a bunch of tools and they're not "integrated"
Alternatively, maybe it’d be possible to have the container expose an IDE over http (possibly vscode through the browser?).
Just be aware that means the musl libc, which is often fine, but not always. Software that expects glibc can crash or have unpredictable behavior. The JVM is a good example, unless you get a JVM that was originally built against musl.
And sometimes there are also issues with BusyBox, where it differs from other implementations of the same tools.
If you know where to find Alpine-compatible wheels, or host your own, Alpine has no build-speed penalty.
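For Python specifically, PyPI also ships musl-compatible (musllinux) wheels for many packages. One way to make sure nothing gets compiled inside the Alpine image is a sketch like this; the wrapper name is made up:

```shell
# Sketch: force pip to use prebuilt wheels only, so an Alpine build either
# stays fast (musllinux wheel found) or fails fast (source build would be
# needed). The function name is a hypothetical convenience wrapper.
alpine_pip_install() {
  pip install --only-binary=:all: "$@"
}
```

If this fails for a package, that's the point at which you'd add your own wheel index or accept the compile cost.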
1. Using a :delegated or :cached flag when using a bind mount can speed it up a bit
2. For folders that need a lot of RW, but don’t need to be shared with the host (think node_modules or sbt cache), I bind a docker volume managed by docker-compose. This makes it extremely fast. Here's an example: https://gist.github.com/kamac/3fbb0548339655d37f3d786de19ae6...
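The same idea works with plain `docker run` instead of docker-compose; the image name and paths below are placeholders:

```shell
# Sketch: keep a heavy read/write directory (node_modules) on a named Docker
# volume while the source tree stays a bind mount from the host.
# Image name and paths are placeholder assumptions.
run_dev() {
  docker volume create proj_node_modules >/dev/null
  docker run --rm -it \
    -v "$PWD:/app:cached" \
    -v proj_node_modules:/app/node_modules \
    -w /app node:20 bash
}
```

The named volume shadows the bind-mounted `node_modules`, so installs inside the container never cross the slow host filesystem boundary.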
But if you do like the idea of docker dev environments, check out a tool like batect: https://github.com/batect/batect It's somewhat like if docker-compose had make-like commands you could define. Your whole dev environment and workflow can be defined in a simple yaml config that anyone can use.
Ashley Broadley's github page at https://github.com/ls12styler sadly doesn't contain a repo with his rust dev work to date. (I will ask him, as the article has some really good stuff in it.)
----
Very nice. I'm doing similar at the moment. Maybe take a look at
https://www.reddit.com/r/rust/comments/mifrjj/what_extra_dev...
A list of useful built-in and third-party cargo subcommands.
As you note, common recommended app crates (source) should be gathered separately.
I have several other links and ideas, e.g. supporting different targets such as x86_64-unknown-linux-musl, but they're too long for this post!
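For the musl target specifically, the usual steps are roughly these, wrapped in a function only so it reads as a sketch:

```shell
# Sketch: add the musl target via rustup and build against it, which
# typically yields a statically linked Linux binary.
build_musl() {
  rustup target add x86_64-unknown-linux-musl
  cargo build --release --target x86_64-unknown-linux-musl
}
```

Some crates with C dependencies additionally need `musl-gcc` (the musl-tools package on Debian/Ubuntu) available as the linker.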
One thing that bugs me is that I can't (or don't know how to) get my current state into a text file, from which I can reproduce it.
It's also not fun for embedded development. Guess what, I need to access USB devices, serial, mass storage, hid - super annoying with this setup.
1. If you want X11 (haven't figured out audio yet)
"-e DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix:ro"
2. Firefox
Add "--shm-size=2g" to the run flags, and start with: firefox --no-remote
3. Entering container
Just map a command to enter the container with the name as parameter / optional image type as second.
That way you get a new fresh environment whenever you want
The command would just start the container if it's not running, or exec into it otherwise. I go the extra mile and have Docker start an SSH server inside the container, and I just ssh into it.
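A wrapper along those lines might look like this; the naming scheme and default image are assumptions, not the commenter's actual script:

```shell
# Sketch of an "enter container" helper: reuse the container if it already
# exists, create it from an image otherwise, then exec a shell into it.
# Default image and the sleep-infinity keepalive are assumptions.
denter() {
  local name="$1" image="${2:-ubuntu:22.04}"
  if docker ps -a --format '{{.Names}}' | grep -qx "$name"; then
    docker start "$name" >/dev/null
  else
    docker run -d --name "$name" "$image" sleep infinity >/dev/null
  fi
  docker exec -it "$name" bash
}
```

Called as `denter myproj` or `denter myproj node:20`, so a fresh environment is one command away.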
Additionally, we use a wrapper script to symlink the current containerised project to the same location in the host system. This ensures that output paths from the containerised environment point to valid paths on the host:
E.g. docker mount: /home/me/dev/proj1234 -> /workspace
symlink in host: ln -sfn /home/me/dev/proj1234 /workspace
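That wrapper can be as small as a function around ln; a sketch, with paths mirroring the example above:

```shell
# Sketch: mirror the container's mount point on the host via a symlink,
# so paths printed inside the container resolve on the host too.
link_workspace() {
  local project_dir="$1" mount_point="${2:-/workspace}"
  # -s symbolic, -f replace an existing link, -n don't descend into an
  # existing directory symlink at the target
  ln -sfn "$project_dir" "$mount_point"
}
```

Re-running it when you switch projects just repoints the one symlink.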
I like the idea of using k8s as suggested upthread. I just haven't had much time to push changes / work on it recently. One thing worth thinking about: I have moved to podman. It seems a lot slower to start up, but it runs in user space, which seems sensible.
Unless you have a Linux-based operating system, Docker behaves very poorly.
NB: The Hyper-V backend behaves a bit less poorly than the WSL backend.
I've found that Docker Desktop uses a lot of disk I/O whenever you use volumes, pull an image, or do anything else that touches the hard drive.
There are two parts of the dev environment - the programmer preferences and the project libraries and other infrastructure. What I would like is to have a way to compose those two and ideally something that would work the same way inside a docker container as in a full VM.
To provision stuff _inside_ your docker container with Ansible, I've found Packer is the easiest way to do it: https://www.packer.io/docs/provisioners/ansible-local There was apparently a tool called ansible-bender that did something similar but was abandoned. Packer makes it easy to define a container that's provisioned with a local Ansible playbook.
Ultimately, though, I think using Ansible with containers is a code smell. If you provision a container with Ansible you have to pull in an entire Python install, and that blows up your container size fast. You can do multi-stage builds and carefully extract the stuff you need, but it's a real pain. IMHO, minimal shell scripts that run in the very tightly defined and controlled environment of the Dockerfile (i.e. they don't have to be super flexible or support any and every system, just this container) are the way to go.
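The kind of minimal script meant here might look like the following; the package list is illustrative and it assumes a Debian-based base image:

```shell
# Sketch: write a minimal, container-only provisioning script, meant to be
# COPY'd into the image and RUN from the Dockerfile. The package list is
# an illustrative assumption for a Debian-based base image.
write_provision_script() {
  cat > provision.sh <<'EOF'
#!/bin/sh
set -eu
apt-get update
apt-get install -y --no-install-recommends build-essential git
# Trim the apt cache so the layer stays small.
rm -rf /var/lib/apt/lists/*
EOF
  chmod +x provision.sh
}
```

In the Dockerfile that becomes `COPY provision.sh /tmp/` followed by `RUN /tmp/provision.sh`, with no Python runtime pulled in.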
Mounting things in the right locations is a nightmare, and even minor changes become a hassle. For Ansible, just learn to use virtualenvs.
Terraform may be a little better.
I'm curious if there are other benefits to this approach though besides just saving time when setting up a new machine. The article mentioned "you end up with (in my opinion) a much more flexible working environment." Any ideas what they might mean?
There's all kinds of little benefits that don't seem that important until you have use for them. Of course Guix and Nix go closer to being actually reproducible, but Docker is better than nothing.
What are the benefits? Are there down sides to being operating in the docker container for everything?
You can do the same, but with easier access to the host, and therefore to hardware devices.
Moving my config around is as easy as carrying my dotfiles around.
If you really need a "container", debootstrap + systemd-nspawn does the job and provides much better sandboxing with 10x less complexity.
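A rough sketch of that route; the chroot path, Debian suite, and bind mount here are illustrative:

```shell
# Sketch: bootstrap a minimal Debian tree and enter it as a lightweight
# "container" with systemd-nspawn. Path, suite, and bind mount are
# illustrative assumptions; both commands need root.
make_dev_machine() {
  local root="${1:-/var/lib/machines/dev}"
  sudo debootstrap --variant=minbase stable "$root"
  sudo systemd-nspawn -D "$root" --bind="$HOME/src:/src" /bin/bash
}
```

systemd-nspawn gives you a real PID/mount/network namespace boundary without any daemon or image format in between.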
You don't need Docker or Nix.