Also, a bigger problem than system calls is what parts of the file system the program can access. The concept that a program has all the privileges of its owner is the biggest single problem with permissions.
What might work is having a few general classes of programs, with appropriate restrictions. Consider, for example, permission set "game, single player":
- Can read anything in its install package.
- Can read/write only to a working directory associated with the product/user combination.
- Can go full screen, use audio output (not microphone), access mouse/touch, etc.
That seems reasonable. Angry Birds could run under those restrictions. For some games, the DRM won't work, the anti-cheating won't work, the ads won't work, the in-game purchasing won't work, the updater won't work, and the social leader board won't work. Still, it would be reasonable to require in an app store that games still work locked down to that level, even if some features are disabled.
One way an app store might make this work: programs that require very limited permissions are easy to get into the store, while programs that require extensive permissions go into the "adults only" section of the store, or have to go through a source code audit at the developer's expense.
Eventually we will get it right, because Theo is right: normal people will just disable security, because in their eyes it is a hassle.
One just needs to search the online forums for people asking how to run as root on Mac OS X or Windows.
The people are not at fault; they simply don't know about these things. When security is bad, it's a design problem - therefore, it's the system/platform/app developer's fault (although app developers are much less at fault than the platform developers, since they can only control what's given to them by the platform vendor).
You may want to take a closer look at the slides about what has been pledged so far. httpd, smtpd, ntpd, relayd, slowcgi, xterm... They're not quite emacs, but they're also not just command line tools.
I don't think this is a permission set so much as it should be a security domain. At the moment we have the (user,group) tuple. The lesson of mobile OSs is that this needs to be (application,user,group) or possibly (vendor,user,group) - because the vendor/application developer is a potentially hostile actor.
Each app having its own "home" directory eliminates so many problems. It gives you a new problem, which is that apps are no longer composable. The solution to that is probably to put the work of choosing which applications are allowed to open which files back into the Finder/Explorer part of the system (which would be able to see everywhere) and let it do the opening.
Also, on Android the threat model is malicious apps, while on OpenBSD it's trustable executables getting bugs exploited. Malicious apps can play tricks with the privileges; exploits have to fight against privileges that are already set.
http://www.openbsd.org/papers/hackfest2015-pledge/mgp00032.h...
I see a problem: if we allow an application to query its capabilities and bail out when, say, a flashlight cannot use the microphone, that kind of defeats the purpose of privileges. The way to go is to fail calls silently and have an API that makes dealing with that easy. With the same microphone example, a flashlight app could simply get silence from the mic stream (which makes failing legitimate uses harder to debug), or simply fail on attempts to open the mic stream.
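A minimal sketch of that silent-failure idea, with all class and method names invented for illustration (this is not any real platform's audio API): a denied app reads all-zero samples and cannot tell a revoked capability from a genuinely quiet room.

```python
class MicStream:
    """Hypothetical capability-aware microphone stream.

    If the capability was granted, read() returns real samples; if not,
    it returns silence instead of raising, so the caller cannot probe
    for the permission by catching errors.
    """

    def __init__(self, granted: bool):
        self._granted = granted

    def read(self, n: int) -> bytes:
        if self._granted:
            return self._read_hardware(n)  # stand-in for the real driver call
        return b"\x00" * n                 # silent failure: all-zero samples

    def _read_hardware(self, n: int) -> bytes:
        return b"\x01" * n                 # placeholder "real" audio data


flashlight_mic = MicStream(granted=False)  # a flashlight app never gets audio
samples = flashlight_mic.read(4)
print(samples)
```

The design trade-off is exactly the one noted above: legitimate failures become harder to debug, which is why a debugging facility outside the app (e.g. a per-app permission log) would have to accompany such an API.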
I'm not impressed by De Raadt's objection to seccomp. BPF programs may technically be turing-complete, but most of the things pledge() does can be implemented by a pretty simple seccomp filter that's just a flat list of conditionals implementing a whitelist or blacklist.
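The BPF whitelist itself is C territory, but the kernel-enforcement flavor can be demonstrated without writing any BPF: seccomp's original strict mode is a fixed whitelist of read/write/_exit/sigreturn. A sketch (the prctl constants are the Linux values from linux/prctl.h and linux/seccomp.h; the code falls back gracefully on non-Linux systems or inside containers that already installed a seccomp filter):

```python
import ctypes
import os
import signal

PR_SET_SECCOMP = 22        # prctl option
SECCOMP_MODE_STRICT = 1    # whitelist: read, write, _exit, sigreturn only

libc = ctypes.CDLL(None, use_errno=True)

pid = os.fork()
if pid == 0:
    # Child: enter strict seccomp, then attempt a forbidden syscall.
    prctl = getattr(libc, "prctl", None)
    if prctl is None or prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0:
        os._exit(42)       # no seccomp available here (non-Linux, or already filtered)
    os.open("/dev/null", os.O_RDONLY)  # open(2) is off the whitelist: kernel kills us
    os._exit(0)            # never reached (even exit_group is off the whitelist)

_, status = os.waitpid(pid, 0)
if os.WIFSIGNALED(status):
    outcome = "killed-by-kernel"       # the kernel terminated the child on open()
elif os.WIFEXITED(status) and os.WEXITSTATUS(status) == 42:
    outcome = "seccomp-unavailable"
else:
    outcome = "unexpected"
print(outcome)
```

A real seccomp-bpf filter replaces the all-or-nothing strict whitelist with per-syscall (and per-argument) conditionals, which is exactly the flat conditional list described above.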
Meanwhile De Raadt points out, correctly, that voluntary security mechanisms will be ignored by most developers... but pledge() appears to be voluntary.
Seccomp-bpf is often used by sandboxes like Chrome or Sandstorm.io (of which I am lead developer), where it is not voluntary for the code that ends up being run inside the sandbox. But sandbox developers are likely to want seccomp's customizability over pledge's ease-of-use.
So while it's nice that pledge() is so easy to use, it strikes me that it's targeting the wrong audience with that design.
But De Raadt's own slide 5 makes a convincing case that "optional security is irrelevant", and he dismisses SE Linux on that basis. I don't see why the same doesn't apply to pledge.
Don't get me wrong, I think it's great for this to be available and I would like to see a similar, easy-to-use seccomp wrapper available on Linux. But, sadly, app developers aren't likely to use it.
As they've used it so far it's not that voluntary for the user. When De Raadt says voluntary mitigations don't work, he's talking about mitigations that a sysadmin can easily disable via settings.
Unless developers build options to control it at runtime then in practice pledge() is a lot less voluntary than SE Linux which has a knob to enable or disable system-wide.
So, the way I'd recommend doing the filename whitelist is by setting up a mount namespace. Create a tmpfs, create the necessary directory tree inside it, bind-mount each whitelisted path in the tmpfs to the real file, then pivot_root into the tmpfs. This sounds complicated but is actually not very much code, and again a library could make it easier.
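A sketch of that recipe using unprivileged user namespaces so no root is required (constants are the Linux values from sched.h and sys/mount.h; the final pivot_root step is left as a comment, and the code reports "unsupported" where user namespaces are disabled, e.g. in many containers):

```python
import ctypes
import os
import tempfile

CLONE_NEWNS = 0x00020000    # new mount namespace
CLONE_NEWUSER = 0x10000000  # new user namespace (grants CAP_SYS_ADMIN inside it)
MS_BIND = 4096              # bind-mount flag

libc = ctypes.CDLL(None, use_errno=True)


def build_jail(whitelist):
    """Build a tmpfs containing bind mounts of only the whitelisted paths.

    Sketch only: a real implementation would finish with pivot_root(2)
    into the tmpfs and unmount the old root.
    """
    unshare = getattr(libc, "unshare", None)
    if unshare is None or unshare(CLONE_NEWUSER | CLONE_NEWNS) != 0:
        return "unsupported"           # user namespaces disabled here
    root = tempfile.mkdtemp()
    if libc.mount(b"tmpfs", root.encode(), b"tmpfs", 0, None) != 0:
        return "unsupported"
    for path in whitelist:
        target = os.path.join(root, path.lstrip("/"))
        os.makedirs(os.path.dirname(target), exist_ok=True)
        open(target, "w").close()      # a bind mount needs an existing target
        if libc.mount(path.encode(), target.encode(), None, MS_BIND, None) != 0:
            return "unsupported"
    # ... pivot_root(root, old_root) and umount of the old root would go here ...
    return "jailed"


# Run in a forked child so the parent's namespaces are left untouched.
pid = os.fork()
if pid == 0:
    os._exit(0 if build_jail(["/etc/hosts"]) == "jailed" else 1)
_, status = os.waitpid(pid, 0)
outcome = "jailed" if os.WEXITSTATUS(status) == 0 else "unsupported"
print(outcome)
```

As the comment says, a library wrapping exactly this sequence would make it a few lines for the application author.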
But I think you could also do it with pure seccomp. The trick is to copy the filename list into memory pages that you subsequently mark read-only. Then, have your seccomp filter whitelist specifically pointers to those strings, and prohibit making the pages writable again.
(Disclaimer: I just came up with this on a whim, it probably needs more thought.)
https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux....
I think it's quite readable and one could modify it easily to whitelist open on "/dev/null".
I think this problem should be solved not by the developer ("I pledge to only read files") but by the user/administrator ("you are only allowed to read files"); attempts to violate that could either fail hard on opening the file in rw mode, or silently pipe data to /dev/null on writes.
As an added benefit this would teach programmers to actually expect calls to fail, and to easily test for that by applying restrictions. In such an environment a program could test its surroundings during initialization and either refuse to start or try to work in a limited fashion. Blowing up at runtime is better than nothing, though still only borderline acceptable. Yes, this helps catch misbehaving programs and, more importantly, greatly helps mitigate exploits (the exploit automagically runs in an isolated jail).
Rachel by the Bay has awesome writing [1] on exactly this topic. Makes one think: how many of us have encountered failed fork()/malloc()/fopen()/execve() and are guilty of releasing buggy code, because "f it, highly unlikely, will fix later"?
Also, the developers know whether a program needs to write to files or not. Why even leave it as an administrator task to lock down the program, if we already know that it will never, ever need to write to a file unless it's being exploited?
Who should set up the permissions? If you trust the developers, you trust them to set up the permissions for you. However, what happens if you don't trust them? You need administrators and users to be able to tweak what the programs are allowed to do.
What is really great about doing it at the developer level is that the administrator does not need to think about that stuff.
For large programs like emacs or python, it is unfortunate that you can't disable the privs just for a function call, for example. I'd like to see a discussion of why this wouldn't work. I guess there is a good reason - but they don't say.
You could "allow only these privs until Done", where Done is defined by the program jumping back to a point set by the pledge() call. If the process doesn't allow writing to executable memory, this would work (at least for non-JIT processes).
I wonder if there are ways this could be used by python. Perhaps it could be done using a fork: the program forks to run just the privileged bit of code. The child can use pledge() to drop all the stuff it doesn't need, do its business, then die.
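That fork-and-drop pattern doesn't need pledge() specifically; a portable sketch can use setrlimit(2) as a stand-in restriction. Here the child gives up the ability to open new file descriptors, does its work, reports over a pipe that was opened before the restriction (already-open descriptors keep working, mirroring pledge semantics), and dies. RLIMIT_NOFILE is only an illustrative substitute for a real pledge/seccomp drop:

```python
import os
import resource

r, w = os.pipe()                       # set up the channel before restricting
pid = os.fork()
if pid == 0:
    os.close(r)
    # "Pledge" substitute: no new file descriptors from this point on.
    resource.setrlimit(resource.RLIMIT_NOFILE, (0, 0))
    try:
        open("/etc/hosts")             # would be an exploit's first move
        verdict = b"open-succeeded"
    except OSError:
        verdict = b"open-blocked"
    os.write(w, verdict)               # the pre-opened pipe still works
    os._exit(0)

os.close(w)
result = os.read(r, 64).decode()
os.waitpid(pid, 0)
print(result)
```

The same shape works with a real pledge() call in the child on OpenBSD: fork, drop, work, report, die.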
Qmail is a good example of this philosophy: Many small binaries that isolate different functionality and are run as different users and mostly communicate via command line and pipes. It makes the attack surface small. But pledge() could have made it even smaller.
UNIX in general yes, Mac OS X and Windows not.
The problem as Theo points out, is getting developers to use it.
For example, Windows has fine grained security for all object handles in the system, but as many security exploits show, almost no one makes proper use of them.
I use the path argument as a simple form of chroot(2). Previously I had to create a vnd (think loopback device if you are coming from Linux) to chroot nicely. On code updates, some process had to rsync static assets into the chroot (I preload all of the needed perl, then chroot()). On Linux, the same app uses containers/namespaces, leveraging read-only bind mounts for static assets, seccomp, and various prctl fiddling. All that ends up being a few hundred lines of code. With pledge it's really just a few lines to call the syscall. Much easier to reason about.
Even if you end up having to allow most syscalls, the path argument alone IMHO makes it worth it.
There were too many times that SELinux would cause issues for them because they didn't understand the built-in policies and where to place stuff. Even as a security-conscious person, I find it a HUGE pain in the behind, and I've spent many hours debugging SELinux and its policies.
I think selinux use has increased lately, at least from my perspective.
Where I work I am constantly forcing it on people and volunteer to solve any problems they might have to ease their transition.
Just like pledge would require passionate developers who actually care about implementing pledge at the application level, SELinux requires passionate sysadmins who actually care about using it, and about their co-workers using it.
Drawbridge classifies syscalls into groups, and the syscalls an application is allowed to use are registered. When the application is executed, a runtime gateway verifies that the application only uses the syscalls that were registered. Drawbridge does more things (it generates a library that maps the 800+ syscalls to their group equivalents, etc.), but there are similar ideas.
I thought Drawbridge was neat, but seems not to have moved much beyond MSR.
A link to the pledge man page since I haven't seen it mentioned yet: http://www.openbsd.org/cgi-bin/man.cgi/OpenBSD-current/man2/...
Eventually even how the sandboxing works in Windows 8/10 for store applications.
I'm really looking forward to 5.9 if it includes pledge as well as vmm (the native hypervisor).
Seriously lowered my view of the presentation.
Linus, in one of his less bright moments, called the OpenBSD team a bunch of masturbating monkeys [1]. Unfortunately, the Linux kernel's conspicuous lack of attack mitigation measures (compared to Win/Mac/OpenBSD/etc) does make one wonder who has been masturbating over the past few years.
[1] http://article.gmane.org/gmane.linux.kernel/706950 (from the article)
(to be clear: I like and use Linux a lot... but Linus's disregard for security is becoming a liability)
Article could have been good but is a bit too sensationalist, e.g. pointing out that Ashley Madison runs Linux only to admit that it had nothing to do with their security breach -- OK, so why did you mention it, then?
(In truth, kernel security rarely matters for servers, because the application is usually the first line of defense. I say this as someone who runs one of the rare services where kernel security does matter, so yeah, I wish Linux did more hardening, but the article is misleading.)
Not only that, Solaris allows you to wrap programs without any source modifications easily using ppriv to drop or add privileges as required.
Almost every slide in the presentation that talks about how you would use the proposed pledge() interface applies to Solaris' privileges model as well.
Some relevant examples:
http://www.kernelthread.com/publications/security/solaris.ht... http://docs.oracle.com/cd/E23823_01/html/816-4863/ch3priv-25...
Solaris' role-based access control and advanced privileges model even lets you implement things like only allowing someone to become 'root' if both them and another person logs in at the same time. Think of the "two-keys required to unlock this door" sort of approach to security:
https://blogs.oracle.com/gbrunett/entry/enforcing_a_two_man_...
Or applications that embed many utils into one?
There have been actual exploits (multiple ones!) in file(1), where a bug in parsing can result in arbitrary code execution. That's really a failing of the permissions model: file(1) is a program that does nothing but read a file and print a result, so buggy parsing code should have a failure mode no worse than either it crashing, or printing the wrong result. But as-is, since it has the full permissions of the user who ran it, it can do things like email someone your SSH keys, or delete your home directory, which is functionality the binary clearly doesn't need access to for legitimate operation.
For plugins, that gets complicated. That's a case where there would need to be some reworking of how plugins are called, perhaps breaking the program into a series of plugins connected by pipes, where each individual program has its small set of abilities. But that's just an off the top of my head guess, there could very well be a better way of doing it.
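A toy sketch of that pipes idea (the "parser plugin" here is a trivial inline script, invented purely for illustration): the risky stage runs in its own process and only ever sees bytes on stdin, so in a real design it could pledge or seccomp itself down to stdio before touching untrusted input.

```python
import subprocess
import sys

# The "parser plugin" stage: its own process, communicating only via pipes.
# In a real design it would drop privileges (pledge/seccomp) before reading.
parser = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; data = sys.stdin.read(); print(len(data))"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

out, _ = parser.communicate("untrusted input bytes")
print(out.strip())
```

A compromise of the parser stage then yields control of a process that can only write a result to a pipe, not email SSH keys or delete a home directory.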
All the same, even if the app used it with most syscalls enabled, it would reduce the attack surface.
> formerly known as tame()
http://www.openbsd.org/papers/hackfest2015-pledge/mgp00002.h...
OK, people clearly disagree - but I'm still not seeing it, so can someone please explain what's more optional about SELinux than about OpenBSD's pledge? I mean, I'm not trying to make some kind of gratuitous dig here; I'm trying to make a serious contribution to the discussion.
They're different kinds of mitigation mechanisms anyway, and could easily work together. pledge() (and seccomp-bpf) are mitigations intended to be applied by the application author, of the "I know my IRC client should never call ptrace()" sort. SELinux is a mitigation intended to be applied by the system administrator, of the "I know my ETL loader job should only need to read files labelled loader-input, write to the directory labelled loader-temp, and connect to the syslog and database sockets" sort.
You are invoking some pretty ridiculous semantics to dispute "optional".
Though they are a bit complex.
Now, the misspellings "priviledge" and "seperation" ...