Modern package management systems like APT put a lot of effort into tracking the files they install and remove, and even then the tracking isn't complete; any file created by a program after it is installed will not be tracked.
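On a Debian-style system you can see both halves of that claim with dpkg (the log path below is a hypothetical example):

```shell
# dpkg records every file a package installed, so it can map both ways:
dpkg -L coreutils | grep bin/ls     # files shipped by the package
dpkg -S "$(command -v ls)"          # which package owns this file?

# ...but a file a program creates after installation is owned by nothing
# (/var/log/myapp.log is made up for illustration):
dpkg -S /var/log/myapp.log || true  # "no path found matching pattern"
```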
You could accomplish the same thing in other ways (as Apple's sandboxing tech does), of course.
Well, there /is/ another way to do it.
STATIC LINK ALL THE THINGS
Which would work if licenses and copyrights didn't exist.
We're at the point in the hype cycle where it's getting fashionable to dismiss containers as overkill, but the reality for most of us out there is that most software is far more complicated than a single executable, and containers make complicated software easier to deploy.
Managing internal dependencies (like libraries) is another concern entirely. But containers are good for that, too.
I don't think it would.
Dynamic linking allows a library to be patched once, with the fix applying to every program that uses it. If every program were statically linked, you would have to rebuild and redeploy each one individually.
Not to mention the waste of space.
I'm guessing much of that is moot these days, but IMHO it's still something to aim for.
On the other hand, deploying stuff is dead easy now. You just tell the hoster "deploy this Docker image, expose port X as HTTP, and put an SSL offloader in front" and that's it; no more "we need <insert long list> to deploy this" or countless hours spent with the hoster getting weird-framework-x to behave correctly.
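As a sketch, that whole conversation with the hoster collapses into one config file (the image name and the nginx setup are hypothetical):

```yaml
# docker-compose.yml: the app image carries its own dependencies;
# the hoster just runs this and terminates TLS in front.
services:
  app:
    image: example/weird-framework-x:1.0   # hypothetical image
    expose:
      - "8080"                             # plain HTTP inside the network
  ssl-offload:
    image: nginx:alpine                    # the "SSL offloader in front"
    ports:
      - "443:443"
    depends_on:
      - app
```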
If it's actually useful we'll find someplace for it to live (or just in containers if that's all that's needed). If not, finding that out was cheap.
I just sometimes wonder if the cart hasn't gotten in front of the horse on the whole container front. The fact that the term "bare iron" has been hijacked to mean "not under virtualization" made me start to think that something is seriously fucked up.
That is a complex mix of services running on one machine, some sharing the same flavor of VM, with fractions of the total available resource capacity allocated to different components. If you cannot make hard allocations that the kernel enforces, you cannot accurately reason about how the system will perform under load; and if one of your missions is to drive total system utilization quite high, you have to run things at the edge.
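Those hard, kernel-enforced allocations are exactly what cgroups provide; a sketch against the cgroup v2 filesystem (needs root; the "webapp" group name and `$APP_PID` are hypothetical):

```shell
# Carve out a hard slice for one component; the kernel enforces both caps.
mkdir /sys/fs/cgroup/webapp
echo "50000 100000" > /sys/fs/cgroup/webapp/cpu.max   # 50ms per 100ms = half a CPU, max
echo "512M" > /sys/fs/cgroup/webapp/memory.max        # hard memory ceiling
echo "$APP_PID" > /sys/fs/cgroup/webapp/cgroup.procs  # move the service in
```

With every component capped like this, you can reason about worst-case contention instead of hoping the neighbors behave.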
Agreed. What I find so odd is that this problem was simply and elegantly solved in BSD with 'jail' and everyone went about their business.
I do not understand why (what appears to be) the Linux answer to 'jail' is so complicated, so fraught, and the subject of so much discussion.
I am not sure that containers and their build scripts represent the $huge_profit_potential that people think they do ...