The value I get from Kubernetes is the API platform that allows my system to organically become more complex over time, but hopefully in an organized way.
As someone working on the marketing side of the endeavor, I see the shift the author describes. There are a handful of PaaS companies picking up where Heroku left off, and in the enterprise world DevOps is evolving toward "platform engineering". Platform engineering suffers from being poorly defined, but there appears to be growing demand within large enterprises for something like an internal Heroku. But there's still a problem.
To me, the problem is not Kubernetes. The problem is that tooling has become so specialized that the focus of work has become integration between tools. And that cumulative integration work complicates the operational responsibilities of software developers. Even if you have a dedicated DevOps team, the complexity of those integrations flows down to developers in the form of different systems for logging, monitoring, firewalls, CDN, CI/CD, secrets, etc.
I haven't looked in a while, but I recall there once being a utility that could take a Docker Compose/Swarm YAML and convert it to Kubernetes manifests - Kompose? Has that matured into something useful?
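To make the question concrete (hedged, since I haven't verified Kompose's current feature set either): Kompose operates on Compose-format YAML - the same format Swarm stacks consume - and emits Kubernetes manifests. A minimal, hypothetical input would be:

```yaml
# docker-compose.yml -- hypothetical minimal service definition
services:
  web:
    image: nginx:1.25      # any container image
    ports:
      - "8080:80"          # host:container port mapping
```

Running `kompose convert -f docker-compose.yml` against a file like this should generate a Deployment and a Service manifest for `web` in the current directory.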
There is no good solution here: to have a SIMPLE platform, you need to reduce the number of use cases you support to just a couple of options. The reality is that every single company I have worked for has a ton of different services with different requirements, and those choices end up being reflected in the platform, which now needs to be able to run all of them. If you have a single simple use case, you should just run it on a boring EC2 instance with a boring dockerd and a boring NLB in front (as an example), but the reality is that this is almost never the case.
If you have a lot of different workloads to run at scale, there is no better tool than Kubernetes. The complexity comes from its power to run almost anything you want.
From a business perspective it can, and most of the time does, benefit an organization to outsource app deployment.
I've seen first-hand organizations that have gigantic teams of DevOps working endlessly on perfecting these systems, all the while it's like pulling teeth for an app developer to get a new container deployed.
The app developer doesn't see it, but behind every decent app deployment in Kubernetes there are millions of hours of expertise in running distributed applications reliably. The simpler deployment platforms do a good job of abstracting some of this, but from what I've seen nobody does all of it exceptionally well - and if they did, they could probably charge whatever they wanted to people who want to use it.
Of course, in the latter case, you might end up in the predicament where the solution's creators had different opinions than you. And then you either live with the disagreement, or circle back to hiring people that know how to use the complex big box of tools.
Other articles have zeroed in on the missing piece: that unusual intersection of a product mindset with infrastructure expertise.
Without that product mindset, "platform" is just another word for "Kubernetes". It's there because it is the popular thing to do, or because no one gets fired for implementing it.
A lot of developers will throw compute resources at problems rather than solving them in code. I get it - that is the faster route in almost every case, and they won't get hired or promoted if they don't use the current "new hotness" in our industry. This leads to insane cloud spend and super complex deployment setups.
Businesses and VCs have been trying to make the sysadmin role obsolete for over 15 years now. They shove this responsibility onto developers and expect them to do effectively two jobs for the price of one. We are simply overloaded with work, and getting code into production is something some developers don't want to own - they want an easy button to press to do that job. This is what is driving the desire for a "platform experience". All of us in the tech community are paying the price with more work responsibilities and higher expectations from the C-suite to deliver features faster.
I am all for making Ops easier for everyone on the engineering team to consume, but not at the complexity price we are currently paying.
I see this often mentioned in the context of k8s and find it baffling that it gets repeated. K8s simply swaps one form of lock-in for another, at the abstraction layer.
The core/control plane components aren't interchangeable, but a lot of the "userland" can be swapped around (I have cri-o, Calico, and a homemade CSI on libvirt VMs at home, and it runs workloads the same as containerd, the AWS VPC CNI, and the EBS CSI at work).
However, it's not open in the sense that there's a drop-in replacement.
Am I (and Betty Junod, and Seth Vargo) wrong about the desirability of Heroku's interface? Was Cloud Foundry far worse than Kubernetes, or did it just not get traction?
I admit that's a pretty loaded question, so I should give context: I dealt with Cloud Foundry from the user side, not the admin side. This was around 2016, and I had very little exposure to what else was going on with it. But I was impressed that some key elements of Heroku-style PaaS were seemingly working fine.
So if it was far harder to get going than Kubernetes, that's a good explanation. But I'm guessing there's much more to it than that.
I think nowadays building on top of Kubernetes has completely eclipsed the CF stack. Buildpacks are a CNCF incubating project, and you can get k8s pretty close with different tools/abstractions.
Cloud Foundry isn't like Heroku, it's more like "build your own AWS" or maybe "build against this shim layer so you can make a credible threat to migrate when your cloud provider jacks up their prices".
The target audience isn't application developers, it's CTOs and CFOs.
The reason IMO that Cloud Foundry didn't catch on is that from the admin side, deployment of vanilla CF was a nightmare, and maintenance could be challenging.
You could pay for a Cloud Foundry distribution with guardrails, but that ended up being $$$ and still required having employees with systems/ops knowledge if you weren't just deploying to the cloud provider du jour anyway.
I think CF had its golden years before k8s really started to catch on. Then there was a push by a lot of companies to bring CF to k8s (I was working at one such company through its Cloud Foundry Distribution -> Cloud Foundry Distribution on K8s journey). I think these could have caught on if someone had made it work in a truly seamless way. Unfortunately, all the abstractions developed to bridge CF concepts to a K8s world ended up being leaky, requiring customers to have CF domain knowledge on top of K8s domain knowledge to successfully deploy and maintain their in-house CF - which proved to be too tall an order.
Unfortunately, CF seems to have fallen by the wayside now (though I'm sure there will still be companies running it for 20 years if their needs don't outgrow the capabilities of their mostly-working deployments) and AFAIK nothing has really taken its place.
I haven't had the opportunity to try this yet, but Kubevela (https://www.cncf.io/projects/kubevela/) looks like it could be the closest thing to a modernized CF-like experience on top of K8s that should (hopefully) be more sensible to deploy and manage. Would really love to hear thoughts from people who have used both though.
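For anyone who hasn't seen it, KubeVela's unit of deployment is an `Application` custom resource built on the Open Application Model; the sketch below uses its documented built-in `webservice` component and `scaler` trait, though the app name and image are purely illustrative:

```yaml
# KubeVela Application: a CF-style "here's my app, run it" declaration.
apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  name: my-app                 # illustrative name
spec:
  components:
    - name: web
      type: webservice         # built-in component type (long-running service)
      properties:
        image: nginx:1.25      # illustrative image
        ports:
          - port: 80
            expose: true       # also creates a Service
      traits:
        - type: scaler         # built-in trait controlling replica count
          properties:
            replicas: 2
```

KubeVela expands this into the underlying Deployments, Services, etc. - roughly the `cf push` experience expressed as declarative config.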
I wish Hashicorp would release Nomad under a better license. It's a superior system in every way except mindshare.