Containers have been around for a LONG time: Solaris Zones, FreeBSD jails, cgroups, etc. are all built into the kernels we use today.
You don't need to use Docker.
The idea is fungible services: even when it's literally just a container that starts a Go binary, I can quickly scale to 1000s of COMPLETELY independent processes and ORCHESTRATE THEM across thousands of clusters from one centralized system.
If I need to shift 1000s of instances of that one Go binary to US-WEST-1 because US-EAST-1 is down, I can automate it or run one command keyed on a Kubernetes label and shift the traffic.
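A minimal sketch of what that label-driven failover can look like (the deployment name, image, and replica count here are hypothetical; `topology.kubernetes.io/region` is the standard Kubernetes topology label):

```yaml
# Hypothetical Deployment pinned to a region via a node label.
# Shifting the workload is a one-line change to the nodeSelector
# (or a `kubectl patch`) applied from one centralized control plane.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-go-service            # hypothetical service name
  labels:
    app: my-go-service
spec:
  replicas: 1000
  selector:
    matchLabels:
      app: my-go-service
  template:
    metadata:
      labels:
        app: my-go-service
    spec:
      nodeSelector:
        topology.kubernetes.io/region: us-west-1   # was us-east-1
      containers:
        - name: my-go-service
          image: registry.example.com/my-go-service:latest  # hypothetical image
```

The same shift can be scripted as a one-liner, e.g. `kubectl patch deployment my-go-service -p '{"spec":{"template":{"spec":{"nodeSelector":{"topology.kubernetes.io/region":"us-west-1"}}}}}'`.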
These are just a few of the massive benefits we get with containers.
I can deploy an ENTIRE datacenter with a yaml file. My ENTIRE company's infrastructure MTTR (mean time to recovery) from a total outage, starting from a GitHub repo, is less than 35 minutes, and we're a billion dollar company; 80% of that time is just waiting on load balancers and clusters to come up. The only parts of this that AREN'T provider-agnostic are the load balancers and networking, since each provider has its own APIs, IAM/policies, etc. that are completely unique between providers/datacenters. Nothing cares what RAM, distro, CPU, or anything else is being used; we can deploy anywhere, ARM or x86.
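A sketch of what "deploy everything from a repo" can look like in practice: a Kustomize entry point that pulls the whole stack together, so one `kubectl apply -k .` recreates the environment (the directory layout here is hypothetical):

```yaml
# kustomization.yaml -- hypothetical repo layout; applying this one
# file recursively deploys every manifest checked into the repo.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespaces.yaml
  - deployments/
  - services/
  - ingress/
```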
Without containers I would need a $150k F5 load balancer distributing load across a ton of $30k Dell PowerEdges (and I'd need that x1000s).
I've been in infrastructure for 15+ years at massive scale: webhosts, CDNs. I do NOT want to go back to not using containers, ever. Nobody on my team writes any non-container code or infra. The FIRST thing we do in every single repo is make a Dockerfile and docker-compose.yml so it's easy to work on things, and every single server at every company in the last decade of my SRE career we've migrated to containers and never once regretted it.
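What that "first thing in every repo" can look like, as a minimal sketch for a Go service (the paths and port are hypothetical):

```dockerfile
# Dockerfile -- hypothetical multi-stage build producing a static Go binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app   # hypothetical package path

FROM gcr.io/distroless/static
COPY --from=build /out/app /app
ENTRYPOINT ["/app"]
```

```yaml
# docker-compose.yml -- local dev: `docker compose up` builds and runs it
services:
  app:
    build: .
    ports:
      - "8080:8080"   # hypothetical service port
```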