1. Choosing OS / flavor / architecture (Debian 7 x64),
2. Setting up networking (private network with its own IP address),
3. Setting up SSH access ($ vagrant ssh),
4. Kicking off provisioning (Chef/Puppet/Salt/Ansible/Bash).
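The four steps above map onto a single Vagrantfile. A minimal sketch (the box name, IP, and script path are made-up examples, not anything from the post):

```ruby
# Hypothetical Vagrantfile covering the four steps above
Vagrant.configure("2") do |config|
  # 1. OS / flavor / architecture -- box name is an assumption
  config.vm.box = "debian-7-x64"

  # 2. Private network with its own IP address
  config.vm.network "private_network", ip: "192.168.33.10"

  # 3. SSH access comes for free: `vagrant ssh` after `vagrant up`

  # 4. Provisioning -- here a plain shell script; could be Chef/Puppet/etc.
  config.vm.provision "shell", path: "bootstrap.sh"
end
```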
With Vagrant, if you're playing around in your VM and accidentally screw something up and can't recover [0], you simply "$ vagrant destroy" and "$ vagrant up" and you'll have a working VM again.
Couple Vagrant with Puppet, and that destroy/up cycle will also reinstall all of your software and settings for you, meaning your full, working environment is back up and running within minutes, just as it was before you screwed it up.
[0] I experienced this several times when apt-get would get itself into a state where I couldn't install/remove/purge anything. Google and #debian never let me recover.
Docker and Vagrant are convenient for building (and distributing) consistent environments on top of (potentially wildly) different software or hardware platforms. Docker has a ton of other tricks up its sleeve (one example is "layers": http://docs.docker.io/en/latest/terms/layer/).
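To make the layers idea concrete: each instruction in a Dockerfile produces a read-only layer that gets cached and shared between images. A hypothetical sketch (image contents made up for illustration):

```dockerfile
# Each instruction below creates one cached layer
FROM debian:7                                      # base layer, shared by every image built on it
RUN apt-get update && apt-get install -y nginx     # layer: installed packages
COPY site/ /usr/share/nginx/html/                  # layer: your files; only this rebuilds when they change
```

If you edit your site and rebuild, Docker reuses the cached base and package layers and only rebuilds the COPY layer, which is why rebuilds and pulls are fast.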
Chef/Puppet/Salt/et al. are configuration management tools which enable programmatic definition of infrastructure. The key point is: you define how the system should look, not the specific steps on how to get there. Abstraction layers (management of files/users/packages/services/etc.) provided by the CM tool shield you from a nightmare of permutations, corner cases, and platform-specific options; it'll just enforce a given configuration regardless of local changes.
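As a concrete example of "define how the system should look": a hypothetical Puppet manifest declares the desired state of a package, service, and file, and the tool works out the platform-specific steps to get there (resource names here are illustrative):

```puppet
# Declarative desired state -- no install commands, no steps
package { 'ntp':
  ensure => installed,
}

service { 'ntp':
  ensure  => running,
  enable  => true,
  require => Package['ntp'],
}

file { '/etc/ntp.conf':
  ensure => file,
  source => 'puppet:///modules/ntp/ntp.conf',
  notify => Service['ntp'],   # restart the service when the config changes
}
```

If someone hand-edits ntp.conf or stops the service, the next Puppet run puts it back: the manifest is enforced regardless of local changes.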
Having a central source of truth for systems configuration is a game-changer in itself; your configuration directives or applications can query (or update) your configuration management database, which enables some very cool automation with very little effort. Then there's the community: for any given stack, there's probably a well-documented, well-tested Chef cookbook or Puppet manifest to build it, instantly plugging you into a rich experience base.
Yes, one could script all of this from scratch, but I'm not sure why one would.
Another advantage of containers is that deployment can be made atomic: either container A is running, or container B. You can't end up with a half-finished upgrade which leaves your server in an undefined, broken state. That property becomes very important when you deploy to a large number of servers.
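The atomic-deployment property looks something like this in practice, sketched with plain docker commands (container and image names are made up for illustration):

```shell
# Hypothetical blue/green swap: the new container must be healthy
# before the old one is retired.
docker run -d --name app-b myapp:v2        # start container B alongside A

# ...run health checks against app-b here...

docker stop app-a && docker rm app-a       # only then retire container A

# If app-b fails to start, app-a is still serving untouched --
# there is no half-upgraded in-between state on the host.
```

Scaled out, an orchestrator does this same swap per server, so a failed rollout leaves every machine on the old, working container rather than in a mixed state.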