Your C++ example is orthogonal to the deployment aspect because it discusses the application. Kubernetes and the fragile shell scripts are about the deployment of said application.
How are you going to deploy your C++ application? Both options are available, and I would wager that in most cases, Kubernetes makes more sense, unless you have strict requirements.
A "C++ monolith" allows me to potentially bypass a lot of this deployment stuff because it could serve lots (millions) of users from a single box.
No it's not. You can use it to run a bunch of monoliths too. K8s provides a common API layer that your whole organisation can adhere to, just like containers are a generic encapsulation of any runnable code.
No it doesn't. Let's assume you write that application as a C++ monolith. Congratulations, you now have source code that could potentially serve 10k users on a toaster... if only you could get it onto that toaster. How are you going to start the databases it needs? How are you going to restart it when it crashes, or worse: when it still runs but is unresponsive? How are you going to upgrade it to a new version without downtime? How are you going to do canary releases to catch bugs early in production without affecting all users? How do you roll back your infrastructure when there is an issue in production? How do you notice when your toaster server diverges from its desired state? How do you handle authorization to be compliant with privacy regulations? I'd love to see that simple and safe shell script of yours which handles all those use cases. I'm sure you could sell it for quite a bit of money.
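For the record, the restart, unresponsiveness, and zero-downtime questions above are roughly what a single Kubernetes Deployment manifest answers declaratively. A minimal sketch (the name `monolith`, the image, port 8080, and the `/healthz` endpoint are all placeholder assumptions, not anything from this thread):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monolith                # hypothetical app name
spec:
  replicas: 3                   # crashed pods are replaced automatically
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0         # upgrade with no downtime
      maxSurge: 1
  selector:
    matchLabels:
      app: monolith
  template:
    metadata:
      labels:
        app: monolith
    spec:
      containers:
        - name: monolith
          image: registry.example.com/monolith:v2   # placeholder image
          ports:
            - containerPort: 8080
          livenessProbe:        # catches "still running but unresponsive"
            httpGet:
              path: /healthz
              port: 8080
            periodSeconds: 10
            failureThreshold: 3
```

The desired-state-divergence question is answered by the same mechanism: the control plane continuously reconciles the cluster toward this spec.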
What you fail to understand is that k8s was never about efficiency. Your monolith may serve 10k users more efficiently, but it can never scale to a million. At some point you can't buy a bigger toaster and have no choice but to build a distributed system.
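And once the app is containerized, scaling out is a small declarative addition rather than a rewrite. A sketch of a HorizontalPodAutoscaler, assuming a Deployment named `monolith` (a made-up name) already exists:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: monolith
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: monolith          # placeholder deployment name
  minReplicas: 3
  maxReplicas: 50           # scale out well past a single toaster
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```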
Besides, microservice vs monolith is orthogonal to using k8s.
I use Convox [1] which makes everything extremely simple and easy to set up on any Cloud provider. They have some paid options, but their convox/rack [2] project is completely free and open source. I manage everything from the command-line and don't use their web UI. It's just as easy as Heroku:
convox rack install aws production
convox apps create my_app
convox env set FOO=bar
convox deploy
You can also run a single command to set up a new RDS database, Redis instance, S3 bucket, etc. Convox manages absolutely everything: secure VPC, application load balancer, SSL certificates, logs sent to CloudWatch, etc. You can also set up a private rack where none of your instances have a public IP address, and all traffic is sent through a NAT gateway: convox rack params set Private=true
This single command sets up HIPAA- and PCI-compliant server infrastructure out of the box. Convox automatically creates all the required infrastructure and migrates your containers onto new EC2 instances, all with zero downtime. Now all you need to do is sign a BAA with AWS and make sure your application and company comply with regulations (access control, encryption, audit logs, company policies, etc.)

I run a simple monolithic application where I build a single Docker image, and I run this in multiple Docker containers across 3+ EC2 instances. This has made it incredibly easy to maintain 100% uptime for over 2 years. There were a few times where I've had to fix some things in CloudFormation or roll back a failed deploy, but I've never had any downtime.
My Docker images would be much smaller and faster if I built my backend server with C++ or Rust instead of Ruby on Rails. But I would absolutely still package a C++ application in a Docker image and use ECS / Kubernetes to manage my infrastructure. I think the main benefit of Docker is that you can build and re-use consistent images across CI, development, staging, and production. So all of my Debian packages are exactly the same version, and now I spend almost zero time trying to debug strange issues that only happen on CI, etc.
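As an aside, packaging a C++ app this way is straightforward with a multi-stage build: compile in a full toolchain image, then ship only the binary. A rough sketch (the CMake layout, the `myapp` target, and port 8080 are assumptions for illustration):

```dockerfile
# Build stage: full toolchain, assuming a CMake-based project
FROM debian:bookworm AS build
RUN apt-get update && apt-get install -y --no-install-recommends \
    g++ cmake make \
 && rm -rf /var/lib/apt/lists/*
WORKDIR /src
COPY . .
RUN cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build

# Runtime stage: slim image containing just the compiled binary
FROM debian:bookworm-slim
COPY --from=build /src/build/myapp /usr/local/bin/myapp
EXPOSE 8080
ENTRYPOINT ["/usr/local/bin/myapp"]
```

The runtime image carries no compiler or source, so it stays small, and the same image runs unchanged across CI, staging, and production.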
So now I already know I want to use Docker because of all these benefits, and the next question is just "How can I run my Docker containers in production?". Kubernetes just happens to be the best option. The next question is "What's the easiest way to set up Docker and Kubernetes?" Convox is the holy grail.
The application language or framework isn't really relevant to the discussion.
[2] https://github.com/convox/rack
P.S. Things move really fast in this ecosystem, so I wouldn't be surprised if there are some other really good options. But Convox has worked really well for me over the last few years.
I think for the average application there's still something to be said for manual cross-layer optimization between infrastructure, application, and how both are deployed.
What I mean is we can't yet draw too clear a line between the application and how it's deployed, because there are real tradeoffs between keeping future options open and getting the product out the door. A strength of Kubernetes is that once you get good at it, it works for a variety of projects, but it takes a lot of effort to get to that point, and that effort could have gone into something else.