That being said, I sometimes ask myself why we can’t just constantly think KISS and YAGNI. Like, do we really need this level of abstraction and complexity? I’ve been “working” with k8s and I would probably fail any interview on it because I feel like I’m always googling my way through issues. I don’t even care anymore, because I know that for my own purposes outside of work, I keep my code and systems stupid simple.
And maybe this sounds cringey to some but I’m happy to write a few scripts on my own to handle deployments without needing to break my software into a thousand pieces. Single responsibility code using a few languages that are best suited for the task at hand (in my case it’s mostly node, elixir, go) that’s easy to break apart and ship separately is so nice. Why can’t we do the same at work?
Oh well, I’ll collect my check 2x a month thanks.
What's the best way you've found to do that?
They of course support Docker and k8s also. And Azure Functions, which are like Lambda in AWS.
K8s is very modular in my experience, so if you don’t need something you can easily ignore it and not pay a complexity cost. Nomad does not seem much simpler to me (especially because you basically have to pair it with Consul and Vault).
I am genuinely curious.
Observation: a side effect of being extensible is that people deploy extensions.
There is some kind of law of complexity budgets, where if you make the simple things easy, people will tend to ratchet up complexity by adding more stuff until the system "just" fits in their heads again.
Bare k8s with a simple ingress path and workload is predictable and nice to admin.
Cluster with lots of extra bits (custom autoscalers, cert-manager, complex ci systems, serverless stuff, custom operators, service meshes) can have lots of "non-local" interactions and seems to lead to environments that are scary to upgrade.
It's all relative.
Hi, Nomad PM here - We've gotten this feedback a lot and have been taking steps to respond to it. We added simple service discovery in Nomad 1.3 and health checks and load balancing shortly after. So you shouldn't need Consul until you want a full service mesh. And then in Nomad 1.4, which just launched, we added Nomad Variables. These can be used for basic secrets & config management. It isn't a full on replacement for Vault, but it should give people the basic functionality to get off the ground.
So going forward we won't have a de facto dependency on these other tools, and hopefully we can live up to the promise of simplicity.
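For the curious, a hedged sketch of what that looks like in a job spec (the job, task, and variable names are made up; `provider = "nomad"` is the native service discovery switch added in Nomad 1.3, and the `nomadVar` template function reads a Nomad 1.4 Variable — check the current Nomad docs for exact syntax):

```hcl
job "web" {
  group "app" {
    network {
      port "http" { to = 8080 }
    }

    service {
      name     = "web"
      port     = "http"
      provider = "nomad" # native discovery: no Consul agent required

      check {
        type     = "http"
        path     = "/healthz"
        interval = "10s"
        timeout  = "2s"
      }
    }

    task "server" {
      driver = "docker"
      config { image = "example/web:1.0" }

      # Read a secret from Nomad Variables instead of Vault.
      template {
        destination = "secrets/app.env"
        env         = true
        data        = <<EOF
DB_PASSWORD={{ with nomadVar "nomad/jobs/web" }}{{ .db_password }}{{ end }}
EOF
      }
    }
  }
}
```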
Also, in what way is Swarm abandoned? I mean, if it works fine and is still supported in Docker CE, then it's still OK to use, at least in small businesses and hobbyist use cases where Swarm's simplicity is attractive.
I would have preferred Nomad but the resource requirements are pretty high for the "control server" component. https://www.nomadproject.io/docs/install/production/requirem...
Nomad servers may need to be run on large machine instances. We suggest having between 4-8+ cores, 16-32 GB+ of memory, 40-80 GB+ of fast disk and significant network bandwidth. The core count and network recommendations are to ensure high throughput as Nomad heavily relies on network communication and as the Servers are managing all the nodes in the region and performing scheduling. The memory and disk requirements are due to the fact that Nomad stores all state in memory and will store two snapshots of this data onto disk, which causes high IO in busy clusters with lots of writes.
Obviously this is not going to fit on a group of Raspberry Pis or other SBC computer nodes you can solar power out in a field. This manages about 100 client nodes. No need for a cluster, since we don't need high availability on our control plane, and there's no actual state stored there that isn't created from our CI pipeline.
A bit disappointed with cloudplane here.
Edit: @dollar - good one! Quite plausibly the case.
But jumping right to "Let's look at how Kubernetes works behind the scenes, and why the complexity may be a tradeoff worth making" would be more useful to me. Specifically, something like "why all the pieces," comparing them to other solutions, which may be challenging.
Kubernetes is way, way too much for many teams to be able to operate properly. It can be done the right way, and it absolutely has its use cases, but I see so many people using it that really shouldn’t be.
Everyone else gotta complain about it just because.
Whether or not that extra complexity is necessary or beneficial is what's debatable.
What was shown was the ability of systemd to have restart policies on units and to load secrets over some sort of Unix socket primitive. Plus, it does not even try to do topological sorts; it just restarts every time and accepts that its preconditions may be false.
And basically, since it can do that for any more or less untyped pile of resources, it is "flexible". Sure, void* + a tag is flexible.
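Those two systemd features look roughly like this in a unit file (a minimal sketch; the service name and paths are made up, and `LoadCredential=` is the systemd credentials mechanism, available in newer systemd versions):

```ini
# myapp.service -- hedged sketch: restart policy plus secret loading.
[Unit]
Description=Example app with a restart policy and a loaded secret

[Service]
ExecStart=/usr/local/bin/myapp
# No topological reasoning here: on failure, just restart and retry.
Restart=on-failure
RestartSec=5s
# The credential is exposed to the process under $CREDENTIALS_DIRECTORY.
LoadCredential=db-password:/etc/secrets/db-password

[Install]
WantedBy=multi-user.target
```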
K8s is tiring. Reconciliation is not exclusive to K8s, it's not the best system we have, not even close.
It is a particularly popular system with very specific choices, which has a nice property of assuming that state drifts therefore reconciliation is a must.
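Stripped of everything Kubernetes-specific, the reconciliation pattern being described is just a loop that observes actual state, compares it to desired state, and nudges toward convergence. A hedged sketch (the `state` type and `reconcile` function are illustrative, not from any real controller):

```go
package main

import "fmt"

// state is a toy stand-in for observed/desired cluster state.
type state struct{ replicas int }

// reconcile repeatedly compares actual vs desired state and converges
// one step per iteration, the way a controller's sync loop does.
// It tolerates drift: whatever actual is, it moves toward desired.
func reconcile(actual, desired state, maxSteps int) state {
	for i := 0; i < maxSteps && actual != desired; i++ {
		if actual.replicas < desired.replicas {
			actual.replicas++
		} else {
			actual.replicas--
		}
	}
	return actual
}

func main() {
	got := reconcile(state{replicas: 0}, state{replicas: 3}, 10)
	fmt.Println(got.replicas)
}
```

Nothing about this requires Kubernetes; the same loop could sit on top of systemd units, cloud APIs, or plain processes.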
The annoying part is this: to show everyone else that K8s is complex, it is necessary to build a reconciliation-based piece of software that composes well with the rest of the world, and to prove that you don't need K8s to achieve the same features most people actually use, unless you are $bigcorp. Alas, people have finite time, and I do think it is quite clear how to build this from more fundamental pieces such as systemd and more.
That makes this kind of article even more frustrating, because I get the good intent of convincing people that K8s is not frightening and complicated. I really feel there is a lack of theory and research definitions in this area of computer science. Rigor is missing.
What I was told is that it doesn't scale and k8s is simpler, because how does it talk to the database otherwise? Oddly enough, I'm not sure this person has ever _just_ worked with containers without k8s, and so it all falls into a black box.
Which is odd, but all of this is to take roughly 100 servers and get them into the cloud.
At some point I have to wonder if it's even possible for many of these same people to work in a way that's simple.
The worst part wasn’t that the system was literally designed in a broken manner. It was the rudeness of seemingly everyone involved in the project.
But hey maybe that’s changed now.
- Let me explain it to you with something even more complex because, hell, at least I understood this and you still haven't. :)