Name two?
- Fresh state with no performance penalty (AMI + autoscaling)
- Dependencies documented, not reinvented (the Packer file)
- Sandboxing (same)
- Always the same standard port (easier with VMs given the 1:1 app-to-instance mapping)
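The "document dependencies, not reinvent" point is the Packer template itself: the file both describes and reproduces the image. A minimal sketch in Packer's JSON template format (the AMI ID, region, and package names here are placeholders, not from the thread):

```json
{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-XXXXXXXX",
    "instance_type": "t2.micro",
    "ssh_username": "ubuntu",
    "ami_name": "myapp-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "sudo apt-get update",
      "sudo apt-get install -y nginx"
    ]
  }]
}
```

Running `packer build` against this produces a fresh AMI that an autoscaling group can launch directly, which is where the "fresh state, no performance penalty" claim comes from.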
I know most people think that containers/docker/whatever new stack does these things better, and they may be right. The benefits, however, don't outweigh the costs of a weaker toolset and a less mature stack.
For my use cases, the biggest problem is that containers don't solve the "where does this run" question. Whenever I ask this, people loudly exclaim "anywhere!" which is the same as "I don't know" to me.
AWS AMIs run in 11 regions x N AZs around the world. This solves a much bigger technical problem for me than "it's lighter weight and easier to do incremental releases on top of" which seem to be the only things in favor of containers.
Many people, including Amazon, say "run containers on VMs!" This seems unnecessarily complex for little additional gain.
I'm really curious whether the containerization folks are using Packer, and if not, why not.
I am not locked into 1:1 tenancy between applications and instances (though I could have it if I wanted). Multitenancy is trivial. I can spin up new instances of my applications to absorb spike loads or instance failures in single-digit seconds rather than in minutes.

My developers can run every container within a single boot2docker VM instead of incurring the overhead of running six virtual machines. Integration testing is easier because my test harness doesn't have to fight with Amazon; it can use a portable, cross-service system in Mesos.

In addition, I don't have to autoscale in production with the crude primitive of an instance. Multitenancy means I can scale individual applications within the cluster up to its headroom, and only when the entire cluster nears capacity must I autoscale. I can better leverage economies of scale while giving more vCPU power to the applications that need it: two dozen applications sharing a c3.8xlarge will almost always have more compute available to any one of them than each application would get on its own m3.medium.
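The per-application scaling described above is typically expressed as an app definition submitted to a Mesos framework such as Marathon (Marathon isn't named in the thread; the image name and resource figures are illustrative):

```json
{
  "id": "/myapp",
  "cpus": 0.5,
  "mem": 256,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "myorg/myapp:1.2.3",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 8080, "hostPort": 0 }]
    }
  }
}
```

Bumping `instances` rescales one application within the cluster's existing headroom in seconds, without touching the autoscaling group; `"hostPort": 0` lets the scheduler assign ports so many app instances can share one VM.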
I could do this without containers, with only Mesos. It would be worse, but I could do it. I could not do this at all with baked AMIs and instances without spending more money, doing more work, and being frustrated by my environment. I know this because I've built the same system you describe (I preferred vagrant-aws because it was easier to debug when something broke, but we moved to Packer before I left), and I would never go back to it. It was more fragile and harder to componentize than a containerized architecture with a smart resource allocation layer. The running context of a container should be "anywhere", the answer should be "I don't know", and your caring about that is a defect in your mental model.