> You continue to be disingenuous and imply that every application requires 10k lines of code to run on k8s.
Let me clarify and state unambiguously that it won't necessarily take 10k lines of code to run any random application on Kubernetes.
You can, in fact, deploy Prometheus without using the Prometheus operator and you'll technically be "running your monitoring" within k8s. It just isn't likely to be very reliable or useful. :)
> But that's exactly how it worked. I wrote 115 lines of yaml and had multiple environments, load balancers, health checks, and rolling deployments.
If you already had a fully "stateless", self-healing capable application running on not-k8s, and your layout is as simplistic as "2 services with load balancers", you can probably move to Kubernetes with a comparatively small amount of fuss. If your existing setup was pretty tiny, this may have been a worthwhile project.
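To be concrete about what that small pile of YAML buys you in the happy case: for a genuinely stateless service, a manifest along these lines (service name and image are made up for illustration) is roughly all Kubernetes needs to give you rolling deployments, health checks, and a load balancer:

```yaml
# Hypothetical stateless web service; names, image, and ports are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # roll one pod at a time
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
        readinessProbe:        # gate traffic on the app reporting healthy
          httpGet:
            path: /healthz
            port: 8080
        livenessProbe:         # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080
```

Note that every piece of this, the probes especially, only means what it claims if the application behind `/healthz` actually tolerates being killed and restarted at any moment. The manifest asserts statelessness; it doesn't create it.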
If you didn't already have a stateless, self-healing-capable system, and you didn't change your application to accommodate it as part of the port, then regardless of what Kubernetes reports about your pod state, you don't have a self-healing application.
The barrier between application and platform is artificial; they must work together. It's a convenient fantasy that these areas can be cleanly demarcated. You can't take some random thing, throw it on Kubernetes, and declare it all good because you can watch k8s cycle your pods.
Maybe you think this is implicit, but as someone who has spent the last 2.5 years building out k8s clusters for software written by average developers, I can assure you that there are a great deal of people who aren't getting this message.
I went full-time freelance about a month ago. For one of the last in-house k8s services I deployed, the developer told me, "Oh yeah, we can't run more than one instance of this, or it will delete everything." Yet these same people are very proud of the "crazy scalability" they get from running on Kubernetes. Hope the next guy reads the comments and doesn't nudge that replicas field!
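The scary part is that this failure mode lives in a single innocuous-looking field. Nothing in the manifest itself distinguishes "safe to scale" from "will delete everything" (sketched here with a hypothetical service name):

```yaml
# Hypothetical manifest fragment. The only guardrail is a comment.
spec:
  replicas: 1  # DO NOT increase: concurrent instances delete each other's data.
```

Kubernetes will happily accept `replicas: 3` here and report three healthy, Running pods while they destroy your state.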
If you already had a non-trivial system that worked well for failover, recovery, self-healing, etc., why'd you replace it with something that is, for example, still just barely learning how to work reliably with local, non-network-attached storage, a beta feature as of 1.10 [0], released last month? There are many things that sysadmins take for granted that don't really work well within k8s.
I accept that at first glance and with superficial projects, it can be easy to throw the thing over the fence and let k8s's defaults deal with everything. This is definitely the model and the demographic that Google has been pursuing. But if you have something more serious going on, you still have to dig into the internals of nginx and haproxy within your k8s cluster. You still have to deal with DNS. You have to deal with all the normal stuff that is used in network operations, but now, you're just dealing with a weirdly-shaped vaguely-YAMLish version of it, within the Great Googly Hall of Mirrors.
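For example, the moment you need anything beyond default routing, the abstraction hands you raw nginx configuration back, just wrapped in annotations. A sketch, assuming the widely-used ingress-nginx controller (annotation values and the snippet are illustrative):

```yaml
# Hypothetical Ingress; host and service name are made up.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web
  annotations:
    # These are nginx tuning knobs, not Kubernetes concepts:
    nginx.ingress.kubernetes.io/proxy-body-size: "16m"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
    # And when the knobs run out, you inject nginx config verbatim:
    nginx.ingress.kubernetes.io/configuration-snippet: |
      more_set_headers "X-Served-By: $hostname";
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - backend:
          serviceName: web
          servicePort: 80
```

You haven't escaped needing to understand nginx; you've just added a templating layer between you and it.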
Once you do that enough, you say "Well, why am I not just doing this through real DNS, real haproxy, real nginx, like we used to do? Why am I adding this extra layer of complication to everything, including the application code that has to be adapted for Kubernetes-specific restrictions, and for which I must write <INSERT_ACCEPTABLE_LINE_NO_HERE> lines of code as an operator to ensure proper lifecycle behavior?"
Most people aren't willing to give themselves an honest answer to that question, partially because they don't really ask it. They just write some YAML and throw their code over the fence, now naively assured that the system is "self-healing". Then they get on HN and blast anyone who dares to question that experience.
[0] https://github.com/kubernetes/features/issues/121