Why build tools into Docker images? Love it or hate it, there are many senses in which Docker is currently the best medium that we have for distributing and running dev tools. Here's an article making that argument: https://jonathan.bergknoff.com/journal/run-more-stuff-in-doc....
I like what Jim Keller (chip architect) says about complexity: that we need to throw everything away and start from scratch. The interviewer asks how often, and Jim responds that chip architectures are currently redone every 10 years, but it should be more like every 5[1].
Just like any evolutionary process, there comes a point of diminishing returns, because mistakes cannot be corrected once many other things have been piled on top of them. It is difficult to track back, so more gets piled on top just to patch up the old mistakes. Like our recurrent laryngeal nerve, which loops from the brain all the way down into the chest and back up to the voice box[2]. It is even more pronounced in a giraffe. A good architect wouldn't design anatomy like this. The reason it is the way it is, is that evolution has no hindsight, and the marginal cost of rerouting the nerve is higher than just slightly lengthening it. This is what we do in software; a good architect wouldn't design software like this either. Sorry for the diversion, but I just feel so much pain with Docker, Kubernetes, Terraform and the whole load of AWS complexity. Holy shit.
[1] https://www.youtube.com/watch?v=1CSeY10zbqo
[2] https://en.wikipedia.org/wiki/Recurrent_laryngeal_nerve#/med...
It might seem complex and obtuse, but it's turtles all the way down, and everything back to the original von Neumann architecture could definitely be looked at again and redesigned. On the other hand, we're still doing weird things like dividing our day into 24 hours of 60 minutes of 60 seconds for no reason other than "it's always been that way", and if we had a chance to actually design a sensible time system, we could eliminate a lot of the complexity that has piled up on top of it.
But yeah, no way would I ever want to go back to the way things were before all these layers existed.
The one thing that hasn't changed is the commercial (and ego) pressure to build the "one tool to rule them all", which has been pretty much constant throughout the years, set against the reality of trying to string all of these things together.
Small, self-contained tools with good APIs have persisted and will stand the test of time, while monoliths always die eventually (with the possible exception of Excel, which has really managed to cling on).
I don't think kubernetes or docker really have a bright future because they both have an ambition to be "the one tool to rule them all" and they simply can't.
So... it's simpler tools tied together? Are you arguing that simple tools must instead stay apart? Is the developer supposed to build their thing invoking each basic tool one-by-one? Maybe makefiles and shell scripts should also be verboten, then?
Wait a minute, this part is even funnier. So Docker allows you to build your quasi-VMs using all those Unix tools, by talking to Linux that's been put inside your Linux. But apparently this is not what you meant by ‘Unix philosophy’ and it all should be thrown away. Huh.
This has already happened back in the 1990s with Plan 9 from Bell Labs[1] and Inferno[2], but it didn't take due to various factors. With Docker and Kubernetes we are slowly wrangling the current Linux ecosystem to be more akin to Plan 9 environment with private namespaces and networked environment, with "compute" and "storage" servers. But the issues these tools are trying to fix and improve are buried deep in the stack, at the kernel level - fixing them would mean throwing away years of work put into Linux and starting almost from scratch.
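To make the Plan 9 comparison concrete: Linux's namespace machinery is the piece of that design we've slowly bolted back on, and it's exactly what Docker sits on top of. A rough sketch of a private mount namespace, assuming a recent util-linux and unprivileged user namespaces enabled (availability varies by distro):

```
$ unshare --user --map-root-user --mount sh   # new user + mount namespace
# mount -t tmpfs none /mnt                    # mount visible only inside this namespace
# exit
$ ls /mnt                                     # parent namespace is untouched
```

This is the Plan 9 "every process can have its own view of the filesystem" idea, retrofitted at the kernel level rather than designed in from the start.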
Plan 9 was definitely ahead of its time. The original authors learned from their UNIX choices back in the 1970s and looked into the future with a fresh set of eyes. Private process namespaces, lack of a superuser account with full privileges, transparent networking, split of the monolithic system architecture into CPU (compute), fileserver (storage) and terminal with CPU and storage being in the data center on a fast network and terminals being out there used by the users... I would say that this is pretty spot on with 2020s world where cloud and data centers are used almost all the time and end users rely on lightweight (both in a "weight" and in a "computing power" sense) devices to engage with the system at large.
Of course there were missteps as well; nobody's perfect. A byzantine graphical user interface with an overreliance on the mouse (today's world overwhelmingly uses touch-based interfaces instead) probably hinders newcomers the most. But apart from that, it's hard to pick out anything else that is wrong with the platform in the current world, and most of what it offers at the operating-system level is very attractive in today's environment, where user applications need to be protected from each other.
Today Plan 9 still lives on[3] and waits for its time to shine. If enough people are fed up with Kubernetes, I would hope they could see the future elsewhere. There are definitely issues with the current Plan 9 ecosystem and its forks that prevent many people from considering it for work, like a graphical interface stuck in the 1990s while the rest of the computing world evolved and improved. But this can change: a new graphical interface can be implemented to scratch someone's itch - that's how most of the current Linux server and desktop environments came to be. So I would say: look at what the OS provides, trim away the parts you don't like, and start working in a fresh environment built on good ideas.
[1]: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs [2]: https://en.wikipedia.org/wiki/Inferno_(operating_system) [3]: https://youtube.com/watch?v=6m3GuoaxRNM
Meanwhile we're shipping on five year old containers that everyone is afraid to update and no one remembers how to build them. We're building a skyscraper on a floating pier and trying to reach the moon before the tide changes.
I know this is the way, but I am having a hard time because it just doesn't make sense. This doesn't feel like process. It feels like compounding of reactions. Is it just me? My company? Or is this a general feeling?
You can always do it wrong; it's not always the tech. Your description sounds like frustration mixed with some amount of genuine wrongness. But I do agree that after being in the industry for about 15 years, the current "meta" seems to be hitting diminishing returns: ever more complexity for less net gain to the org.
Always remember, KISS.
Despite Dockerfiles being essentially shell scripts run on a clean base install... they seem more approachable to developers than SSH and working with real servers.
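To illustrate the point, a minimal Dockerfile really is just a base image plus shell commands (the base image and packages here are arbitrary examples):

```dockerfile
# Start from a clean, known base install
FROM debian:bookworm-slim

# Each RUN line is effectively a shell command executed on that clean install
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl ca-certificates && \
    rm -rf /var/lib/apt/lists/*

CMD ["curl", "--version"]
```

Everything a sysadmin would have done over SSH is right there, version-controlled and repeatable.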
I always start from an official container base from a reputable source (Debian, Apache, nginx, Alpine)... Or I branch off one of my creations that is based on these... If I want to use someone else's work from an untrusted source, I make my own image and build pipeline for it so I'm in control.
This is my philosophy... I don't have any containers I'm afraid to rebuild... But I don't use kubernetes or rancher, just raw docker, docker compose, and ansible/terraform
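For that raw docker + compose workflow, a minimal docker-compose.yml is often all you need (service name and image are made up for illustration; pinning the tag is what keeps rebuilds predictable):

```yaml
services:
  web:
    image: nginx:1.25        # pin a version so rebuilds are reproducible
    ports:
      - "8080:80"
    restart: unless-stopped
```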
K8s is too complicated for my competence and needs.
That's bad. This is not the way to do it. And honestly, this isn't the fault of Docker, Kubernetes, Rancher, or any tech. Sounds like the org is broken...
I've been to a number of places, big and small, and the ability to maintain and build every part of prod has been seen as important everywhere.
I'm pretty picky, though; one of the questions I ask at interviews is whether the potential employer does deployments on Fridays.
I am growing very tired and weary of all these abstractions. I use them because, well, I want the people I work for and esteem to succeed. But all of this feels like nonsense from engineers at big tech companies trying to justify what they did for their performance review.
Folks, we are not actually solving much!
I feel like we should start over at this point with everything we've learned, but that's just too much to ask, because so many have already invested so much in all of this.
https://access.redhat.com/documentation/en-us/red_hat_enterp...
For a long time I didn't know about:
docker history

This is inaccurate, assuming you are referring to the 'dockerTools.buildLayeredImage' function in the nixpkgs repository. That builds Docker images declaratively, not from Dockerfiles.
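For context, a 'dockerTools.buildLayeredImage' call looks roughly like this (an untested sketch; the image name is made up, and attribute names follow the nixpkgs dockerTools documentation):

```nix
{ pkgs ? import <nixpkgs> {} }:

# Declaratively describe the image; no Dockerfile involved.
pkgs.dockerTools.buildLayeredImage {
  name = "dev-tools";                # hypothetical image name
  tag = "latest";
  contents = [ pkgs.curl pkgs.jq ];  # each store path becomes its own layer
  config.Cmd = [ "${pkgs.jq}/bin/jq" "--version" ];
}
```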
> Docker is an excellent means of distributing those sorts of tools.
No, package managers are an excellent means of distributing and managing installed tools. Docker is an excellent way to package tools, but its distribution and management story is terrible. There isn't even a command to simply show which containers are "outdated" without re-pulling all your images, which can take 10+ minutes for 30 images.
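You can approximate an "outdated" check without pulling by comparing the local digest against the registry's, e.g. with skopeo (a sketch; the image name is arbitrary, and you'd loop this over your images):

```
$ docker images --digests --format '{{.Repository}}:{{.Tag}} {{.Digest}}'
$ skopeo inspect docker://docker.io/library/nginx:1.25 | jq -r .Digest
```

If the two digests differ, the local image is stale. But the fact that this takes a third-party tool and a script rather sharpens your point.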
That seems like a short term view of things.
So the example in the repo being, you have some set of tools which you may want to run in CI/CD, or maybe every developer to have the same version of (linters, deployment tools, test runners, etc) and this will figure out the optimal way of pulling them so they're available?
Thanks, makes sense now I think.