The article tries to pitch read-only Docker images as some kind of solution, but running your applications read-only (and deciding what other permissions to grant them) has nothing to do with Docker images specifically. Using filesystem and process namespaces for application isolation is a good idea.
But lately it seems to be getting more popular to drop untrusted applications (possibly even under outside control and full of who-knows-what) into containers, as if that somehow solves "security". The thing is, you still need to be able to reason about what permissions your application actually requires; there's no getting away from that.
Don't dismiss this concept. It's a perfectly valid approach in some scenarios.
I would also add that attackers are actually after the data. Exploiting application vulnerabilities is just a means to that end, so bringing back an exploitable application from the previous image is a BAD idea.
Having said that, the previous image can be a good starting point for patching up the vulnerabilities and bringing the application back online.
Not strictly true - depending on the parties involved, it can still be a very desirable goal to snoop on a site's continued operations, inject code into visitors' pages, or simply use the host as a platform for further attacks.
Attacking a system that disappears and restarts on a regular basis is a nightmare for current attackers; it's not something that they have the tooling to deal with yet.
I believe some of these ideas were discussed by Dino Dai Zovi in a talk[1] he did that combined a whole bunch of rather out-of-the-box ideas on defense.
Also, the author is arguing: fix now, but preserve all information for fault diagnosis later. For a lot of problems that seems like exactly the right choice.
Are there any open source solutions that already take advantage of features like this? Or are those mostly kept secret for security and business reasons at this time?
> Until we fix this RCE vulnerability, the attacker will
> still be able to execute code on our host [...]
With Docker, it seems to me like we're moving closer and closer to the server being an executable of its own, but with the necessary Linux kernel bits compiled in such that it can execute on (virtualized) hardware.
I'm wondering how far we can take this. The ability to execute code on the host is there because that's what Linux does, but what if we removed this interface and replaced it with one that just accepts one or more ELF binaries at compile time? Then these would become the only Linux executables that this kernel can execute.
As far as I can see, we could do the same to system calls: if an executable can enumerate all the system calls it needs, we can compile a kernel that will accept only these system calls, which should be a small subset of all available Linux syscalls.
> Unikernels are specialised, single address space machine images constructed by using library operating systems. A developer selects, from a modular stack, the minimal set of libraries which correspond to the OS constructs required for their application to run. These libraries are then compiled with the application and configuration code to build sealed, fixed-purpose images (unikernels) which run directly on a hypervisor or hardware without an intervening OS such as Linux or Windows.
That is essentially what pledge(2) does at runtime on OpenBSD.
---
You want to make your way in the CS field? Simple. Calculate rough time of amnesia (hell, 10 years is plenty, probably 10 months is plenty), go to the dusty archives, dig out something fun, and go for it. It’s worked for many people, and it can work for you.
— Ron Minnich
Keep in mind that today's server hardware traces its lineage back to IBM's response to microcomputers like the Apple II and C64.
Agreed. However, as long as you don't treat it as your primary line of defense, it increases the cost to an attacker. And right now, that's the best we can do.
Reducing the attack surface is important, but if a running container is compromised it's imperative a post-mortem is performed immediately - and the issue remediated - to prevent re-exploitation.
If your data gets stolen and your website defaced, what's the use of immutability? I mean, you can always run a diff tool comparing the current code on the server against the code in your repo, right?
Because it forces the attacker to write a payload specific to your service. The standard, reused "drop shell.php and register the IP" approach won't work anymore. And realistically, if the target of the attack is a WordPress installation, the attack will likely be a trivial, automated script.
> Can't you do the same thing at the OS level already?
Yes, you can. Even better, split execution privileges from file privileges, then make it read only, then put a grsec/apparmor/selinux profile on the service. It's not docker specific, but docker does make read only service a little bit easier.
> Wouldn't making the dir read only do the same thing?
Yeah, but who would do that old school thing. Docker security! :-(
And it's not just about diffing your code against your repo - that only works if the attacker went after your code. What about other running processes? New files containing malicious code, outside the paths you usually deploy to? What about new, unexpected cron jobs?
Overall, it could become a pretty complex job. A filesystem with built-in snapshotting makes this a lot easier.