I started with MicroK8s, and while it's a functional solution for some use cases, I was genuinely disappointed in its overall behavior (especially with small clusters, in the 3 to 10 node range).
The biggest problem was Dqlite. I had a tremendous number of issues that originated directly with it: unexpectedly high CPU usage, failure to form consensus with even node counts (especially after a network split), configuration files that had to be manually deleted or renamed to get specific hosts back into the cluster, and generally poor performance over the long term (a 2-year-old cluster slowed to basically a standstill spinning on Dqlite).
I have not used Dqlite in other projects, so it's possible this was a MicroK8s problem, but based on my experience with MicroK8s... I won't touch either of these projects again.
I switched to K3s about 3 years ago now and have had essentially no problems: considerably fewer random headaches, no unexpected performance degradation, very stable, and incredibly pleasant to work with.
---
I have also migrated about half of my workloads to Longhorn-backed PVs at this point (coming from a large shared NAS exposed over NFS). While I've had a couple more headaches here than with K3s directly, this has been surprisingly smooth sailing as well, and it gives me much more flexibility in how I manage my block devices (for context, I'm small, so just under a petabyte of storage, of which ~60% is in use).
If you want to run a cluster on hardware you own rather than rent, K3s and Longhorn are amazing tools for doing so, and I really have to give Rancher/SUSE a hand here. It's nice tooling.
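To give a sense of the per-volume flexibility I mean: most of it comes down to StorageClass parameters. A minimal sketch (the class name and replica count here are illustrative, not my actual config):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-2x            # illustrative name
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"        # copies of each volume, placed on distinct nodes
  staleReplicaTimeout: "30"    # minutes before a failed replica is cleaned up
allowVolumeExpansion: true
reclaimPolicy: Delete
```

You can define several of these with different replica counts and point each workload's PVC at whichever class matches its durability needs.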
A few years back I wrote about it, and most of the core principles of the article are still valid:
https://atodorov.me/2021/02/27/why-you-should-take-a-look-at...
Disclaimer: I work at HashiCorp, but I've held that opinion since before joining; in fact, it's among the reasons I joined.
I will be super curious to see if IBM Cloud actually does ship $(ibm-cloud-cli create-nomad)
1: I also didn't realize they rug pulled Consul, too; that's just cruel https://github.com/hashicorp/consul/blob/v1.20.5/LICENSE
BSL and similar are the second-best thing (after open source licenses), and still drastically better than proprietary/closed source. If it allows someone to pay the salaries of the people developing it, I'm fine with that. And not only because my own salary is one of them; I had the same view for MongoDB, Elastic, Sourcegraph, etc. In an era where massive behemoths can just ship your software as a service for free, companies need to protect themselves as much as they can. Do you ever wonder why there are very few profitable open source companies? Most people know Red Hat, and that's it.
Also, helm gave me a loathing for yaml I never knew I had in me. I avoid some of helm's more vulgar gyrations by using helmfile, but I'm also giving yoke a look.
If it's the whitespace that jams you up, I wanted to point out that since YAML is a superset of JSON, there's nothing stopping you from using text/template to cook JSON manifests. There is, of course, a toJson function available in helm <https://helm.sh/docs/chart_template_guide/function_list/#typ...>
But I'm conceptually with you that the jokers who decided to use a text templating language for a structured output were gravely misguided and now we all suffer
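To make the superset point concrete, a chart template can emit a manifest as JSON rather than indented YAML, which sidesteps whitespace entirely. A sketch (the template filename and values keys are made up for illustration):

```yaml
# templates/configmap.yaml -- valid to kubectl/helm because JSON is a YAML subset
{
  "apiVersion": "v1",
  "kind": "ConfigMap",
  "metadata": { "name": {{ .Values.name | quote }} },
  "data": {{ .Values.config | toJson }}
}
```

Since `toJson` serializes the whole values subtree in one call, none of the `indent`/`nindent` gymnastics are needed.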
Lately there's also RKE2 (https://docs.rke2.io/) that I've been growing fond of; it's only marginally trickier to set up, with the bonus of a more 'standard' cluster distribution and more knobs to twist.
Not that I'd be shy of running K3s in production, but it seems easier to follow the 'standard Kubernetes way' of doing things without having to diff against some of K3s's default configuration choices, which, again, aren't bad at all for folks who don't need all of the different options.
For edge workloads, smaller clusters, or less experienced operators who want to run Kubernetes themselves without depending on a managed provider, K3s is pretty much impossible to beat.
Write a comprehensive technical blog post targeting small and medium-sized businesses that use Hetzner Cloud and are evaluating lightweight Kubernetes alternatives. Compare and analyze the following solutions: k3s, MicroK8s, Docker Swarm, and Minikube.
Structure:
Start with a title and abstract introducing the comparative scope.
Provide sections such as:
Architectural Requirements, Deployment & Lifecycle, Management, Cost Analysis, Security & Compliance, Developer Experience, Strategic Recommendations and end with a clear conclusion and deployment advice.
Style:
Technical but accessible to DevOps engineers or small startup CTOs.
Use citation-style references for data (e.g., [1], [2], etc.).
Incorporate code examples (Terraform, Helm, CLI snippets).
Use tables for cost comparisons or benchmarks where appropriate.
Mention open-source tools specific to Hetzner (e.g., hetzner-k3s, hcloud Terraform provider).
Tone:
Professional, analytical, and informative. No marketing fluff.
Spoiler for people who actually care about TFA: it's a runaway win for k3s. I've never even heard of minikube being run in production. The setup stuff at the end is a complete handwave, though, like "Implement Traefik ingress with Let’s Encrypt via servers.ingress.enabled=true". No serious setup works well with Traefik's default LE integration, and even the k3s docs recommend you install cert-manager. Which isn't too awful to set up, but it's still several hours of fiddling if you haven't done it before (pro tip: install it with kubectl, do not use helm. helm and CRDs are still not friends.)
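For reference, the kubectl route described above is roughly this (the version pinned here is illustrative; check the cert-manager releases page for the current one):

```shell
# Install cert-manager CRDs + controllers straight from the release manifest,
# sidestepping the helm/CRD ownership headaches mentioned above.
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.14.4/cert-manager.yaml

# Wait for the controller and webhook deployments before creating any Issuers;
# the webhook in particular must be ready or Certificate objects will be rejected.
kubectl -n cert-manager rollout status deploy/cert-manager
kubectl -n cert-manager rollout status deploy/cert-manager-webhook
```

The single-manifest install also makes upgrades a plain `kubectl apply` of the newer release file, with no chart-state drift to reconcile.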