Just an “idle” Kubernetes system is a behemoth to comprehend…
Kubernetes is etcd, apiserver, and controllers. That's exactly as many components as your average MVC app. The control-loop thing is interesting, and there are a few "kinds" of resources to get used to, but why is it always presented as this insurmountable complexity?
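The control-loop idea is simple enough to sketch in a few lines. This is a toy illustration of reconciliation (compare desired state against observed state, emit actions to converge them), not actual Kubernetes code; all the names here are invented:

```python
# Toy reconciliation loop: the core pattern behind Kubernetes controllers.
# Desired state comes from the API server; observed state from the cluster.

def reconcile(desired_replicas, running_pods):
    """Return the actions needed to converge observed state to desired state."""
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Not enough pods running: schedule creations.
        return [("create-pod", None)] * diff
    if diff < 0:
        # Too many pods running: schedule deletions.
        return [("delete-pod", pod) for pod in running_pods[:-diff]]
    return []  # already converged, nothing to do

# One pass of the loop: 3 replicas desired, 1 running -> create 2 pods.
actions = reconcile(3, ["pod-a"])
```

A real controller runs this in a loop driven by watch events, but the shape is the same: level-triggered convergence rather than imperative commands.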
I ran into a VXLAN checksum offload kernel bug once, but otherwise this thing is just solid. Sure it's a lot of YAML but I don't understand the rep.
…and containerd and csi plugins and kubelet and cni plugins and kubectl and kube-proxy and ingresses and load balancers…
Sure at some point there are too many layers to count, but I wouldn't say any of this is "Kubernetes". What people tend to be hung up on is the difficulty of Kubernetes compared to `docker run` or `docker compose up`. That is what I am surprised about.
I never had any issue with kubelet, or kube-proxy, or CSI plugins, or CNI plugins. That is after years of running a multi-tenant cluster in a research institution. I think about those about as much as I think about ext4, runc, or GRUB.
step one: draw a circle
step two: import the rest of the owl
Do you understand you're referring to optional components and add-ons?
> and kubectl
You mean the command line interface that you optionally use if you choose to do so?
> and kube-proxy and ingresses and load balancers…
Do you understand you're referring to whole classes of applications you run on top of Kubernetes?
I get that you're trying to make a mountain out of a molehill. Just understand that you can't argue that something is complex when your best examples are a bunch of things that aren't really tied to it.
It's like trying to claim Windows is hard, and then your best example is showing a screenshot of AutoCAD.
For some applications these people are absolutely right, but they've persuaded themselves that that means it's the best way to handle all use cases, which makes them see Kubernetes as way more complex than is necessary, rather than as a roll-your-own ECS for those who would otherwise truly need a cloud provider.
I assume everyone wants to be in control of their environment. But with so many ways to compose your infra that means a lot of different things for different people.
K8s is meant to be operated by one class of engineers, and used by another. Just like you have DBAs, sysadmins, etc., maybe your devops folks should have more systems experience beyond Terraform.
Sir, I upvoted you for your wonderful sense of humour.
Some bash and Ansible and EC2? That is usually what Kubernetes haters suggest one does to simplify.
Etcd is truly a horrible data store, even the creator thinks so.
For anyone unfamiliar with this the "official limits" are here, and as of 1.32 it's 5000 nodes, max 300k containers, etc.
https://kubernetes.io/docs/setup/best-practices/cluster-larg...
The department saw more need for storage than Kubernetes compute so that's what we're growing. Nowadays you can get storage machines with 1 PB in them.
The larger Supermicro or Quanta storage servers can easily handle 36 HDDs each, or even more.
So with just 16 of those holding 36x24 TB disks each, that meets the ~14 PB capacity mark, leaving 44 remaining nodes for other compute tasks, load balancing, NVMe clusters, etc.
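A quick back-of-the-envelope check of those figures (decimal TB/PB, raw capacity before any redundancy or filesystem overhead):

```python
# Raw capacity of the storage tier described above.
servers = 16
disks_per_server = 36
tb_per_disk = 24

raw_pb = servers * disks_per_server * tb_per_disk / 1000
print(raw_pb)  # 13.824 raw PB, i.e. roughly the ~14 PB mark
```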
Cluster networking can sometimes get pretty mind-bending, but honestly that's true of just containers on their own.
I think just that ability to schedule pods on its own requires about that level of complexity; you're not going to get a much simpler system if you try to implement things yourself. Most of the complexity in k8s comes from components layered on top of that core, but then again, once you start adding features, any custom solution will also grow more complex.
If there's one legitimate complaint when it comes to k8s complexity, it's the ad-hoc way annotations get used to control behaviour: they aren't discoverable or type-checked the way API objects are, so you just have to know they could exist and affect how things behave. A huge benefit of k8s for me is its built-in discoverability, and annotations hurt that quite a bit.
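To make the contrast concrete, here's a sketch of an Ingress manifest (hostname and service name are placeholders; the annotation shown is one real-world example from the ingress-nginx controller):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    # Controller-specific behaviour toggled by a free-form string: the API
    # server accepts any key/value here without schema validation.
    nginx.ingress.kubernetes.io/proxy-body-size: "8m"
spec:
  # The typed fields below are validated against the API schema and
  # discoverable via `kubectl explain ingress.spec`.
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example
                port:
                  number: 80
```

Misspell the annotation key and nothing complains; misspell a spec field and the API server rejects the manifest. That asymmetry is the discoverability gap.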
I would ask a different question. How many people actually need to understand implementation details of Kubernetes?
Look at any company. They pay engineers to maintain a web app/backend/mobile app. They want features to be rolled out, and they want their services to be up. At which point does anyone say "we need an expert who actually understands Kubernetes"?
I have to wonder how many people actually understand when to use K8s versus plain Docker. Docker is not a magic bullet, and can actually be a footgun when it's not the right solution.