We were either using someone else's prebuilt orchestration for something like ELK (insecure, and requiring constant auditing to stay acceptable) or rolling it ourselves (very expensive in engineering time). None of it ever worked 100%, because we kept jumping at software packages no one had really taken the time to fully understand. The mentality was "it's containerized!", which many on my team took to mean "we don't need to really grok it, it's in a container!" That burnt us on both our TIG and ELK stacks. I left that job because it became a cycle of putting out dumb fires that were not business-justifiable.
All in all, I'm not saying what anyone is doing is wrong. I'm just saying that if you're going for an orchestrated environment like this, you need a very mature team. You have to really care about learning these services well, and you have to be careful not to let your own architecture take your time away from solving real problems for the business.
The team I was on did not have that maturity, outside of a couple of bitter, burnt-out ops guys who didn't deserve the treatment they got. Buzzword-driven leadership gutted their very proven and stable VMware infra into a total cluster-f K8s setup because "that's what we're supposed to do in 2018! That's what the new engineers want to work in!"
> the "operating system" has moved up a stack
Splitting hairs: the OS is still the same. The "stack" is a newly imposed abstraction on top of already established paradigms, where we are trying to abstract ourselves away from the OS. It's distributed compute more than it is the "OS moving up a stack".
Edit: Ha, I think you may have edited your comment to add the Coinbase article. That article is actually what I point people to when explaining that K8s isn't some silver bullet. I personally think Coinbase is a great example of a compromise: leveraging containers without going off the rails (as they write about, e.g., the need for dedicated "compute" teams).