...you're essentially turning the host OS into both a resource manager and an isolation-boundary enforcer, which is... kind of what hypervisors were specifically designed to do, just at a different level. When the container companies were all starting to appear, I never thought it was a good idea, but given what I was building I never said anything, because "of course the VM guy wouldn't like containers." I thought many times about what an ISO+VM "container" product would look like, but at the time it would have been hard to match the performance of containers even if we could have gotten the developer experience really good.

Rough comparison:

VM:
- Cold start: ~10 seconds with an optimized ISO
- Management overhead: ~256MB baseline
- Consistent performance profile

K8s:
- Cold start: ~30-50 seconds (control plane decisions + networking setup)
- Management overhead: 1-2GB for the control plane alone
- More variable performance due to overlay networking
imo the real question is: at what scale/complexity does the k8s overhead get amortized by its management benefits? For many deployments, I suspect it never does. I will dutifully accept all my downvotes now.
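To make the amortization question concrete on the memory axis alone: a toy sketch using the numbers above (256MB per VM, ~1.5GB for the k8s control plane), plus an assumed ~50MB per-pod overhead that is my guess, not a measured figure. This ignores the management-benefit side entirely; it just shows where the fixed control-plane cost stops dominating.

```python
# Back-of-envelope memory comparison. The per-VM and control-plane
# numbers come from the comparison above; K8S_PER_POD_MB is an
# assumption for illustration, not a benchmark.
VM_BASELINE_MB = 256          # per-VM management overhead
K8S_CONTROL_PLANE_MB = 1536   # fixed cost, midpoint of the 1-2GB range
K8S_PER_POD_MB = 50           # assumed per-pod overhead (hypothetical)

def vm_overhead(n_services: int) -> int:
    # One VM per service: overhead scales linearly from zero.
    return VM_BASELINE_MB * n_services

def k8s_overhead(n_services: int) -> int:
    # Fixed control-plane cost plus a small per-pod increment.
    return K8S_CONTROL_PLANE_MB + K8S_PER_POD_MB * n_services

# First service count where k8s raw memory overhead drops below VMs:
breakeven = next(n for n in range(1, 1000)
                 if k8s_overhead(n) < vm_overhead(n))
print(breakeven)  # → 8 with these assumed numbers
```

Under these (made-up per-pod) numbers the pure memory overhead crosses over around 8 services, which is why "does it ever amortize" depends so heavily on how much you value the management layer rather than the raw resource math.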