Incidentally, this is also not very efficient: UNIX permissions as they are today require 9 bits, namely rwx for owner, rwx for group, and rwx for others. But in an alternative universe where owner's rights win over group's rights which win over others' rights, permissions could be coded in just 6 bits: 2 to express who can read, 2 for who can write, and 2 for who can execute. Each set of 2 bits would be interpreted this way: 00=nobody, 01=only owner, 10=group or owner, 11=everybody.
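A minimal sketch of that alternative encoding in plain shell arithmetic (the `encode` helper and the field order are hypothetical, just to make the bit layout concrete; nothing in UNIX actually works this way):

```shell
# Pack three 2-bit fields (read, write, execute) into 6 bits.
# Field values: 0=nobody, 1=only owner, 2=group or owner, 3=everybody.
encode() {
    echo $(( ($1 << 4) | ($2 << 2) | $3 ))
}

# Readable by group or owner, writable by owner only, executable by nobody:
encode 2 1 0    # 0b100100 = 36
encode 3 3 3    # everybody can do everything = 0b111111 = 63
```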
> It's counterintuitive that the owner can have less rights than the others
I completely concur. I've also never seen it used in the wild, but I know about it because I stumbled upon it more than once while building scripts without being careful about which flags were set.
Why "alas"? I'm comparing apples with apples: UNIX base permissions vs alternative-universe base permissions. If you want to add the 3 flags (sticky, setuid and setgid), you can add 3 bits to both sides of the comparison.
Interestingly, I have come across some people who confuse the sticky bit with the setuid bit.
Indeed, it's funny that you can `sudo chown root regularfile` and then still be able to read it, since the group permissions now apply to you instead of the owner's.
That fails for setuid files, but that's a setuid thing, not an x thing. I guess it also fails for executables that check argv[0], but that's probably not important.
You can test this in Bash: userA does `cat > /tmp/newfile` (assuming a chmod or relaxed umask so /tmp/newfile is created with permissions 0664) and types in lines of text every few seconds; userB does `tail -f /tmp/newfile` and watches the lines appear; then userA does `chmod 600 /tmp/newfile`, but userB can continue to `tail -f /tmp/newfile` and watch new lines appear.
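There's also a single-user variant you can try: permissions are checked at open(2), so an already-open descriptor keeps working even after the mode is tightened (file name is arbitrary; the "fresh open denied" line assumes you're not root, since root bypasses the mode check):

```shell
f=$(mktemp)
echo "hello" > "$f"

exec 3< "$f"        # open a read descriptor before changing the mode
chmod 000 "$f"      # now even the owner can't open() it anew (as non-root)

cat "$f" 2>/dev/null || echo "fresh open denied"
cat <&3             # the existing descriptor still reads: prints "hello"

exec 3<&-           # close the descriptor
rm -f "$f"
```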
That said, Linux gets this right, and Tanenbaum gets it wrong. Preserving access rights requested on open makes the system easier to reason about.
suid is for running things as another user without passwords. It's mostly used for root access today and ignored for anything else. I personally think that was a missed opportunity when they added the unshare/namespace/capdrop stuff... it would have been so nice if the interface to containers was a freaking simple 'suid as this lowly user' for a userland API. Anyway.
and setgid ON DIRECTORIES is so that users can save files under a group that others can then also update. So you can have `/srv/http/htdocs` owned by userA:webmasters with mode `drwxrws---`.
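A sketch of that setup. To keep it runnable without root, this uses a temp directory instead of the `/srv/http/htdocs` path and `userA:webmasters` ownership from the example above (on a real system you'd `chown userA:webmasters` first, which needs privileges):

```shell
d=$(mktemp -d)
chmod 2770 "$d"        # drwxrws---: the 2 is the setgid bit
touch "$d/newfile"     # with setgid on the directory, newfile inherits the
                       # directory's group instead of the creator's primary
                       # group, so the whole webmasters group can update it
ls -ld "$d" "$d/newfile"
rm -rf "$d"
```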
Then there's umask, which may help or get in the way. And getfacl et al.
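Concretely, the umask clears bits from the mode a program requests at creation time (touch asks for 0666; file names here are arbitrary):

```shell
cd "$(mktemp -d)"
umask 022           # mask out group/other write
touch f1            # 0666 & ~0022 = 0644
umask 077           # mask out everything for group and others
touch f2            # 0666 & ~0077 = 0600
stat -c %a f1 f2    # prints 644, then 600
```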
Overall it's a mess that covers many use cases once you've been initiated.
Turns out it doesn't seem possible. Even if you use ACLs, whatever default ACL you set can just be removed from sub-directories by their respective owners. This seems like a big blind spot, unless I just missed something; all those groups, access lists, bits, and I can't even do that?
Relatedly, this may help you implement half of the scheme: you can make a dropbox[0]-style directory by removing the read (r) attribute (keeping write and search) and having some program continuously scan and rename dropped files to some random string.
[0] dropbox in the traditional meaning of course, not the cloud storage
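A sketch of the drop-directory half (the renaming scanner is left out, and a real setup might add the sticky bit, i.e. 1733):

```shell
drop=$(mktemp -d)
chmod 733 "$drop"     # rwx-wx-wx: others may create files in the directory
                      # and traverse it, but without the read bit they can't
                      # list it to discover other people's drops
stat -c %a "$drop"    # prints 733
rm -rf "$drop"
```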
Edit: not sure why I'm getting downvoted. Is it that offensive to question orthodoxy?
Groups are a bit more niche IMO, but without groups there's no real other way to express the constraint that thing X uses files A and B while thing Y uses files B and C: how can they share file B without making it globally accessible or duplicating it? That's probably a less frequent occurrence, but it does come up (and again, more on servers than desktops).
And of course if one has a family then one might want accounts for Mom, Dad, Alice and Bob.
You can no longer trust 3rd party applications to stay in their lanes. Running an application with full access to everything that I as a user have access to seems insane in 2024. Ideally, I don't want a third party application to read or write anything outside of its "home directory" without my explicitly giving it permission. That includes files on my filesystem, network shares, hardware devices, everything.
Which, as others have pointed out, means various system services running as other users (since you don't want them running as your user, and you also don't want them running as root). On most desktop unix machines that only one person uses, that's the main use case for multiple users (and for multiple groups since groups are used to manage access to various functions like printing, usb sticks, cd-roms, etc.).
Groups are a bit less useful (IMO), but still good for handing out access to things like device files. If a daemon should have permissions to XYZ /dev file then you add them to the group associated with it.
The most obvious reason for this is so that a security problem in one of these daemons can't be used to read your Firefox cookies, install a rootkit, and things like that.
I know this can be done in Linux using Flatpaks, Snaps, and the like, but I would really appreciate it if sandboxing could be done at a more fine-grained level, without coupling sandboxing and distribution.
I do, and not just for system services as mentioned by others.
I have separate user accounts for general desktop use, gaming, software builds, software testing, and a variety of containers.
Isolation is useful.
Somehow it slipped in for phones and that’s a big part of why they suck. E.g. you can’t have work, life, private/second life and tmp/trash accounts on your phone and have to either carry multiple devices or mix lives together.
I find them valuable. For example, I have a workstation that is used for different projects with different clients, as well as administrative work for my own business. I want 100% separation between assets related to those different contexts.
It’s bad enough that we have package managers allowing package installation scripts to run arbitrary code, or software wanting you to install via:
curl https://example.com/imnotmalwareipromise.sh | sh
I’ve seen people seriously make the argument that if your entire system gets nuked by malware through these installation methods then it’s entirely your fault. That’s obviously an absurd victim-blaming stance, but the fact is that the risk still exists with modern software development systems. At least if I have separate users for each client or each major project, the worst that can be compromised by a vulnerability introduced during the work for that client or project is that same work.
It’s not just about security though. It’s also about convenience and manageability. Those different clients and projects frequently require the use of specific security credentials and configurations, often for remote services that other clients/projects also use. In a perfect world, I’d like all of the software I use to be XDG-friendly, and I’d like each client/project to have its own home directory with its own independent XDG-style directories underneath, so each user has the configurations and credentials required for its own work and has no knowledge of or access to those of any other user. Finished a project? Archive/nuke that entire user and home directory as appropriate, and nothing is left lying around to break anything or leak anywhere later.
I’m currently playing with NixOS, which means I can also have a limited set of system-wide software installed and have specific additional packages installed per-user or even activated on demand when I change into a specific directory. Again, this means my system has only the software I actually need available at any given time, at the exact version I need for that specific work, and if something is no longer needed by anything I’m doing then it will automatically get cleaned up next time I do an update/rebuild.
None of this really works without the concept of separate users running different software in their own isolated little worlds, possibly concurrently on the same workstation and even sharing the same input/output devices (in a safe way where again they can’t unreasonably interfere with each other – something else that is not 100% there yet, but certainly a lot better than on de facto single-human-user operating systems). The only real alternative is to spin up something like a different virtual machine for each client/project where everything from the OS down is isolated, but I don’t really gain anything by doing that and it’s potentially more work to set up and more difficult to share input/output devices.