The main difference (besides overhead) is that a VM contains a running collection of OS services along with your app. Because of this, the ops team needs to be involved in keeping the VM up to date, the VM usually needs to be "managed" with orchestration much like a physical machine, and all in all it's a big, interdependent mess where the devs can't just forget about deploy-time concerns.
Idiomatic usage of containers forces one particular solution for this: a container contains one service; multiple services means multiple containers, and the orchestration of those containers is up to the ops people and their software.
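To make that concrete, here's a minimal sketch of "one service per container" using the Python Docker SDK; the images, names, and the trivial "orchestration" here are placeholders for illustration, not anyone's real setup:

```python
# One service per container: each process gets its own sandbox.
# Image and container names are placeholders.
import docker

client = docker.from_env()

# Each service runs in its own container; nothing else rides along inside it.
web = client.containers.run("nginx:latest", name="web", detach=True)
cache = client.containers.run("redis:latest", name="cache", detach=True)

# Restarting, scaling, and placement happen *outside* the containers --
# here, trivially, in the calling script; in practice, in ops' orchestrator.
print(web.short_id, cache.short_id)
```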
VMs can also be done this way. EC2 ephemeral instances work great for a CoreOS-like "upgrade by starting up instances from new AMIs and killing the old ones" strategy.
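As a rough sketch of that "replace, don't patch" strategy with boto3 (the AMI ID, instance type, and old instance IDs below are placeholders, and real deployments would drain traffic first):

```python
# Upgrade by launching instances from a new AMI, then terminating the old ones.
import boto3

ec2 = boto3.client("ec2")

NEW_AMI = "ami-0123456789abcdef0"           # freshly built image with the new service
OLD_INSTANCE_IDS = ["i-0aaaaaaaaaaaaaaaa"]  # instances still running the old image

# Bring up replacements from the new image...
resp = ec2.run_instances(ImageId=NEW_AMI, InstanceType="t3.micro",
                         MinCount=1, MaxCount=1)
new_ids = [i["InstanceId"] for i in resp["Instances"]]
ec2.get_waiter("instance_running").wait(InstanceIds=new_ids)

# ...then retire the old instances instead of patching them in place.
ec2.terminate_instances(InstanceIds=OLD_INSTANCE_IDS)
```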
However, since ops people can't be guaranteed that random VMs they're handed don't, in fact, contain arbitrarily old services with possible security vulnerabilities, they have to be conservative about deploying VMs that devs built on their own. So VMs don't tend to get created by devs, and the devs still have to solve the deploy-time problem some other way to give the ops people something they can build into a VM. This isn't as much of a problem with containers.
Unikernel VMs, on the other hand, are effectively equivalent to containers: both provide "just one service in a sandbox"-level granularity, which ops can then manage as they please. If unikernels had come around 10 years ago (if Linux had been factored into a rump kernel, for example), I don't think we'd be nearly as interested in containers today.