I have often made this argument in the context of sandboxing communications software like browsers and e-mail clients, where it is relatively unusual to need access to local files except for their own data. In that context, restricting access to other parts of the filesystem unless explicitly approved would be a useful defence against security vulnerabilities being exploited by data from remote sources. It’s hard to encrypt someone’s data and hold it for ransom or to upload sensitive documents if your malware-infected process gets killed the moment it starts poking around where it has no business being.
More generally, I see no reason that we shouldn’t limit applications’ access to any system by default, following the basic security principle of least privilege. We have useful access control lists based on concepts of ownership by users and groups and reserving different parts of the filesystem for different people. Why can’t we also have something analogous where different files or other system resources are only accessible to applications that have been approved for that access?
I think OS X (and mobile app development in general) shows both that this is great in theory and a net improvement over not having it, but that there are some common pitfalls to address.
First, there are a handful of apps where this model doesn't work so well -- e.g. text editors, FTP clients, etc. So you're inconveniencing quite a few legit apps which need broader access.
Second, as a corollary of the first, you're going to have a lot of apps that legitimately need to ask users to approve broader access. And as the number of apps asking for approval goes up, users become more likely to simply ignore the warning and approve everything. This is especially problematic since we can't assume the average user is a good judge of which apps need which access.
Edit: One way of reducing user acceptance fatigue might be to introduce greater granularity into the requested permissions and then tier them -- e.g. commonly requested vs. uncommon. For example, an app may legitimately need permission to write to any file in your home directory, but it's highly unlikely it'll need permission to write to more than X files per second. Or at least it shouldn't be able to do so without the OS throwing up lots of warnings outside the app.
For files and folders the user cares and knows about (documents, projects, etc), this shouldn't be a problem. For files the user doesn't care about (caches, configuration), you can just leave them in your sandboxed container.
[1] https://developer.apple.com/library/mac/documentation/Securi...
As far as Apple's implementation goes, sandboxes are for kids, not adults that need to get work done.
The truth is that they are too hard for even your average Sysadmin to configure & manage, let alone your average desktop user.
setenforce 1 (yeah, right).
I’d like to see the industry moving in that general direction, though. Even a much simpler model could bring real benefits relative to the status quo, where in application terms our current security model is analogous to everything being root.
However, applying program-level access control is very un-UNIX. How do you compose multiple programs with different security regimes? This bug happened because the "steam" program called the "rm" program via the "shell" program. Inheriting capabilities mostly solves this, but we're familiar with how hard SELinux is to use as a result, and it still doesn't save the user from command-line typos.
I think it's time to make a stronger case for time-reversible filesystems. Accidental deletion matters less if you can just get in your time machine.
The real issue lies in the fact that the file system resides in a global namespace, when it shouldn't. Much like each process has its own environment variables, so should it have its own namespace. Linux does support so-called "mount namespaces" now, but once again they're not inherent parts of the system, but have to be tacked on through explicit unshares, and thus lose the cohesiveness of platforms such as Plan 9. [1]
Something like Windows Store apps, but which ideally wouldn't require the use of a specific store? (Only Enterprise apps can bypass the store, from what I know.)
Sure, this Steam thing sucks, but your disk could die any moment; be prepared. And RAID is not backup.
# figure out the absolute path to the script being run a bit
# non-obvious, the ${0%/*} pulls the path out of $0, cd's into the
# specified directory, then uses $PWD to figure out where that
# directory lives - and all this in a subshell, so we don't affect
# $PWD
STEAMROOT="$(cd "${0%/*}" && echo $PWD)"
[...]
# Scary!
rm -rf "$STEAMROOT/"*
The programmer knew the danger and did nothing but write the "Scary!" comment. Sad, but all-too-familiar.

In the pre- and post-install deb package scripts there was all kinds of crazy shit, like upgrading grub to grub 2 and manually messing with boot sectors. All this stuff in packages innocuously named modbus_driver.deb or what have you, and all in absolutely the most archaic bash syntax possible. I did suggest, strongly, that we jail all the application binaries we twiddle around with, using something like chroot, but was rebuffed.

Eventually somebody mixed up "rm -rf /bin/*" with "rm -rf / bin/*", and the rest is history. They bricked about 100 embedded PCs, all in remote locations, all powering power-plant SCADA systems that did stuff like connect to CalISO for grid management or collect billing information. It cost hundreds of thousands of dollars to fix.
if [ -z "$STEAMROOT" ]
# something isn't right... STEAMROOT=$SOME_OTHER_UNSET_VARIABLE/
"rm -r " is a code smell, as much as "cc -o myprog .c" is. You should always know what files make up your system, and track them in a MANIFEST file. There's rarely a good reason to use wildcards when a program is dealing with its own files. xargs rm -df -- < MANIFEST
fixes this.No. "${STEAMROOT}" will contain something when the "rm" command runs. It just might not be what's expected.
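A minimal, throwaway sketch of the manifest idea described above, using made-up paths and filenames under /tmp -- only files listed in the MANIFEST are removed, so a bad variable can't turn into a wildcard disaster:

```shell
#!/bin/sh
# Sketch of the MANIFEST approach with disposable files under /tmp.
# a.txt and b.txt are tracked in the manifest; keep.txt is not.
set -eu
INSTALL_ROOT="/tmp/demo_install"     # hypothetical install location
rm -rf "$INSTALL_ROOT"
mkdir -p "$INSTALL_ROOT"
printf 'a.txt\nb.txt\n' > "$INSTALL_ROOT/MANIFEST"
touch "$INSTALL_ROOT/a.txt" "$INSTALL_ROOT/b.txt" "$INSTALL_ROOT/keep.txt"
# Delete only what the manifest lists -- no wildcards involved.
( cd "$INSTALL_ROOT" && xargs rm -f -- < MANIFEST )
ls "$INSTALL_ROOT"    # keep.txt and MANIFEST remain
```

If the manifest is empty or missing, nothing outside the listed files is ever touched.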
> STEAMROOT="$(cd "${0%/*}" && echo $PWD)"
SCRIPT="$BASH_SOURCE"
SCRIPT_DIR="$(dirname "$BASH_SOURCE")"

Without the slash, an empty variable would result in a command line of "rm -rf" which would simply fail due to the missing argument.

There is absolutely no need for a trailing slash; it's not as if "rm -rf foo" and "rm -rf foo/" can ever mean two different things -- there can be only one "foo" in the file system, after all.
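A safe way to see what the empty-variable expansion actually does is to echo the command instead of running it:

```shell
#!/bin/sh
# Safe demonstration: print the command rm would have received,
# instead of running it. STEAMROOT is deliberately left empty.
STEAMROOT=""
echo rm -rf "$STEAMROOT/"*   # the glob expands against the filesystem root
echo rm -rf "$STEAMROOT"*    # without the slash, it expands in the current directory
```

The first echo prints something like "rm -rf /bin /boot /etc ..." -- exactly the list of top-level directories the real command would have destroyed.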
Very interesting way of introducing an epic fail with a single character, that really looks harmless.
The product/update is hyped and the release date is set in stone. Tensions are high and your boss has already let you know that you're on thin ice and not delivering on the project goals.
A last-minute showstopper bug comes in, caused by file leaks. Everyone is scrambling, and the file belongs to you, so it's on you to fix it alone. There is no time for code review, and delaying isn't an option (so says management). "I'm afraid if we keep seeing these delays in your components, we might have to consider rehiring your position."
The rm -rf works -- it's a little bit scary, but it works. You write a test case, and everything passes. Still, you add the "Scary!" comment for good measure. You have two more bugs to fix today, and you'll be lucky if you're home by midnight to see your wife or kids. You've been stuck in the office and haven't seen them in days.
Are you an "idiot", "talentless" engineer that "deserves to have his software engineering license permanently revoked"? How do you know this wasn't the genesis of this line of code?
If a doctor accidentally removes the wrong organ because administrators have overscheduled him, "whoopsie, not my fault" is not the appropriate answer. The same applies to engineers working on bridges. Professionals take responsibility for their working conditions.
There is an enormous shortage of programmers right now. Anybody shipping stuff that is bad or dangerous is choosing that. If we drop our professional standards the moment a boss makes a sadface, then we're not professionals.
If they have not internalized the consequences of the risks they're asking their subordinates to take, they'll weigh what look like vague misgivings about "mumble, should be better, dangerous, blah blah" against the better understood risk of their bonus disappearing if the product doesn't ship on time.
Even if you choose to sacrifice yourself, your reputation, and your future prospects--again, if almost no employer in your industry would value what you call professionalism over short-term profits--someone else will ship the code you wouldn't.
That isn't a defense of anything; it's just a fact. Taboos (e.g. against bad code) don't work if they're not shared by the majority.
Incidentally, that sort of thing does happen sometimes.
http://www.cnn.com/2010/HEALTH/10/18/health.surgery.mixups.c...
I'd rather take the first option. At least my reputation will still be somewhat intact.
And as a bonus, I can get out of that hostile environment earlier.
I'd take the same option, but for different reasons: the customer's data. Only pictures of e.g. a deceased wife, no backup, and you just deleted them. You can arm-wave all you like about backing up, but you deleted them.
This is the reality of corporate development. I think we should be demanding more before buying software from vendors, especially when you can't just whip out the source code to audit or fix yourself.
Stuff like this doesn't have to happen, but we let it.
Valve doesn't have bosses, remember?
Pushing in shitty, broken code means you're not doing your job. If your company forces you to do this, then they are not doing their job.
When I tried on Linux, it threw a permission error. Turns out Steam installs the folder "~/.local/share/Steam/userdata/[userid]/config/grid" without the executable permission bit. Without this bit no one can create files in there, Steam included, and the custom images feature gets broken.
I reported the problem, saying they should fix their installer, and got a "have you tried reinstalling it?" spiel. When I said I did, and manually changing the permission fixes the problem, so it must really be it, they closed the ticket with "I am glad that the issue is resolved".
This was a ticket at support.steampowered.com, because I didn't know Valve had a github account. I would open an issue there, but I don't have a linux installation to test again and this sort of misunderstanding burns a lot of goodwill.
To be fair to them, when six sigma of your customer-service load is 15-year-olds asking to be unbanned because they totally didn't use hacks, your CS probably gets a bit desensitised.
I know the moral here is to keep your files backed up, but come on, this is a ridiculous issue Valve still hasn't fixed.
Also: backups. I know it sounds cliché, but look, if it has a mechanical hard drive, the manufacturer could have slightly mis-calibrated one of the mechanical assemblies and the same loss could have happened anyway. The nature of digital storage is that it is essentially ephemeral.
Protect yourself from things outside your control. You don't need the most sophisticated solution, just an external usb drive.
For this reason, the recommended way to use things like rsnapshot is to have your backup directories owned by root and with permissions masked to something like rwxr--r--. If you then want to read your backups easily, you do things like mount it under NFS as read-only.
Just ARQ and an AWS Account [or google drive, or dreamobjects, or SFTP, or...].
Your backup is client side encrypted and completely painless. Take half an hour to set it up and then just forget it till it saves your ass.
Not affiliated, just a fan: http://www.haystacksoftware.com/arq/
The other problem here is the concept that commercial software should be held to some higher standard of liability than noncommercial software. If some random group of strangers build a bridge, which then collapses and kills someone, they're still very much open to lawsuits. So far, the programmers of the world have fended this off by shouting in all-caps about warranties express or implied -- but eventually (I hope) the world will get sick of their shit and hold them accountable for failure. When that happens, whether or not the software is sold should have nothing to do with damage liability (outside of any sale contracts that may apply).
A man can dream.
rm -rf "$STEAMROOT/"*
This is why serious bash scripts use

set -u # trap uses of unset variables
Won't help with deliberately blank ones, of course.

Scripting languages in which all variables are defined if you so much as breathe their names are such a scourge ...
I did this once in a build script. It wiped out all of /usr/lib. Of course, it was running as root! That machine was saved by a sysadmin who had a similar installation; we copied some libs from his machine and everything was cool.
set -e # exit on unchecked failure
That way you don't trudge forward through an untested code path after a failure; you stop there and then.

set -o nounset # set -u
set -o errexit # set -e
set -o pipefail # I'm not sure this has a short form.
I use the long forms for the option names because they're self documenting.
If anyone has more suggestions, I'm all ears.
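To see what nounset actually buys you, here's a tiny sketch (the variable names are invented) where a typo becomes a hard error instead of a silent empty string:

```shell
#!/bin/bash
# Run a buggy snippet in a child shell with -u (nounset) enabled.
# TRAGET_DIR is a deliberate typo for TARGET_DIR.
if out=$(bash -uc 'TARGET_DIR=/tmp/somedir; echo "cleaning ${TRAGET_DIR}/"' 2>&1); then
  echo "typo went unnoticed: $out"
else
  echo "nounset caught the typo"
fi
```

Without -u, the child shell would happily print "cleaning /" -- which is exactly the failure mode of the Steam script.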
rm --preserve-root -rf "$STEAMROOT/"*
would have worked too.

All system architects ever:
1) System data are sacred, we must build a secure system of privileges disallowing anyone to touch them
2) User data are completely disposable, any user's program can delete anything.
All users ever:
1) What? I can reinstall that thing in 20 minutes, there's like 100 million copies worldwide of these files.
2) These are my unique and precious files worth years of work, no one can touch them without my permission!
Makes me wonder: is there a tool, system, or service for auditing how many pairs of eyes have reviewed a given line of code? This would be hard to determine, but could be useful. I am envisioning a heatmap bar or overlay that indicates the number of reviews a line of code has received.
It certainly seems that way. CSGO (their FPS) is notorious for updates that break things, like very recently a gun having less ammo than it should, or masks (halloween thing, I think) being rendered through smoke (a crucial feature of the game).
Long ago I had a kernel hack that would kill any process that attempted to delete a canary file. Worked OK but no chance of it ever going mainstream.
I like the 3-2-1 rule:
At least three copies,
In two different formats,
with one of those copies off-site.
Software is written by humans, who will undoubtedly miss a corner case and not think of every possible environment.

Granted, part of the blame lies in the archaic Unix security model, which doesn't sandbox applications. But ANY line containing "rm -rf" should be reviewed by the most senior dev in the company, or at least one who actually understands shell scripting. It has such a terrible failure mode, there's no excuse not to. (Especially when the dev to blame knew that it's "Scary!".)
"If it doesn't exist in at least three places, it doesn't actually exist"
- Allan Jude - TechSnap
What does this mean?
The transition from meware to software is a hard one - and usually it's how we get terrible reputations as an industry. Basically it's a prototype till it's burned enough beta users.
Edit: OMG -- that is actually Steam, from Valve. I take it back -- this is supposed to be software.
# Scary!
rm -rf "$STEAMROOT/"*
Anybody who writes a line like this deserves to have their software engineering license revoked. This isn't the first time I've seen shit like this (I've seen it in scripts expected to be run as root, no less); it makes my blood boil.

Seriously. "xargs rm -df -- < MANIFEST" is not that hard.

EDIT: I shouldn't be so harsh -- except that the comment admits they knew how poor an idea this line is.
Of course, once it was identified, a concerted effort could have been made to rectify the situation.
Sometimes a problem is identified working on some unrelated aspect of the software and the developer does not have time or scope to change the offending piece of code but wants to place a red flag.
Of course this example does not appear to be systematic approach to marking "fixme", unless of course their fixme tag is actually #scary.
Edit: Of course one would hope they have raised a rather high priority work item for this.
The issue is that this particular script did not set up $STEAMROOT correctly.
Sure, you could somehow check that STEAMROOT is set to something resembling what you want to delete. But manifests are much simpler to get right.
EDIT: Alternatively, pick a well-known UUID, and put everything under a directory with that name under STEAMROOT. Then, to remove:
rm --one-file-system -rf "$STEAMROOT/d0a8936d-1faf-4b82-aac9-e5f104432b24"
This won't do the wrong thing, as it requires that unique string to be present in the pathname. (--one-file-system is for good measure.) (Don't put the UUID in a variable, or you're back to square one with a potential bug!)
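As a runnable illustration of the sentinel idea (using a fixed throwaway path under /tmp and the same example UUID), deleting only the sentinel directory leaves neighbouring files untouched:

```shell
#!/bin/sh
set -eu
# All app-owned files live under a sentinel directory whose name is a
# literal UUID (example value), so the rm pathname can never collapse to "/".
SENTINEL="d0a8936d-1faf-4b82-aac9-e5f104432b24"
STEAMROOT="/tmp/uuid_demo"          # throwaway demo location
rm -rf "$STEAMROOT"
mkdir -p "$STEAMROOT/$SENTINEL/data"
touch "$STEAMROOT/unrelated.txt"
rm -rf "$STEAMROOT/$SENTINEL"       # literal sentinel, never a variable
ls "$STEAMROOT"                     # unrelated.txt survives
```

Even if STEAMROOT were empty, "/$SENTINEL" would almost certainly match nothing, so the rm would be a no-op rather than a disaster.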
rm -rf "$the_dir" && mkdir "$the_dir"
Yes, it's technically slower, but this is shell.

Including my 3TB external drive I back everything up to, which was mounted under /media.
Well maybe it was just unfortunate and the drive just happened to be mounted or the "backup" is always online. If it is the latter it is a really bad idea. If your computer is compromised you risk all of your backup. A proper backup should protect your data from these occasions.
rm -rf $1/$2
Under certain kinds of bad input, the function that had this line would be called without any arguments -- and this (like the bug here) would turn into "rm -rf /".

This horrible, horrible bug lay in wait, until one day the compiler group shipped a patch that looked, felt and smelled like an OS patch that one would add with patchadd(1M) -- but it was in fact a tarball that needed to be applied with tar(1). One of the first systems administrators to download this patch (naturally) tried to apply it with patchadd(1M), and fell into the error case above. She had applied this on her local workstation before attempting it anywhere else, and as her machine started to rumble, she naturally assumed that the patch was busily being applied, and stepped away for a cup of coffee. You can only imagine the feeling she must have had when she returned to find that patchadd(1M) was complaining about not being able to remove certain device nodes and, most peculiarly, not being able to remove remote filesystems (!). Yes, "rm -rf /" will destroy your entire network if you let it -- and you can only imagine the administrator's reaction as it dawned on her that this was blowing away her system.
Back at Sun, we were obviously horrified to hear of this. We fixed the bug (though the engineer who introduced it did try for about a second and a half to defend it), and then had a broader discussion: why the hell does the system allow itself to be blown away with "rm -rf /"?! A self-destruct button really doesn't make sense, especially when it could so easily be mistakenly pressed by a shell script.
So we resolved to make "rm -rf /" error out, and we were getting the wheels turning on this when our representative to the standards bodies got wind of our effort. He pointed out that we couldn't simply do this -- that if the user asked for a recursive remove of the root directory, that's what we had to do. It's a tribute to the engineer who picked this up that he refused to be daunted by this, and he read the standard very closely. The standard says a couple of key things:
1. If an rm(1) implies the removal of multiple files, the order of that removal is undefined
2. If an rm(1) implies the removal of multiple files, and the removal of one of those files fails, the behavior with respect to the other files is undefined (that is, maybe they're removed, maybe they're not -- the whole command fails).
3. It's always illegal to remove the current directory.
You might be able to imagine where we went with this: because "rm -rf /" always implies a removal of the current directory which will always fail, we "defined" our implementation to attempt this removal "first" and fail the entire operation if (when) it "failed".
The net of it is that "rm -rf /" fails explicitly on Solaris and its modern derivatives (illumos, SmartOS, OmniOS, etc.):
# uname -a
SunOS headnode 5.11 joyent_20150113T200918Z i86pc i386 i86pc
# rm -rf /
rm of / is not allowed
May every OS everywhere make the same improvement!

Bryan -- you mean Star Trek didn't get it right? Well, at least they didn't allow shell scripts.
It's an interesting philosophical question though. At what point do you decide that the user truly really can't want to do this even though they've said that they do?
A real self-destruct button would ensure you couldn't recover the data anymore.
So at least try "dd if=/dev/random of=/dev/sda" or, more elegantly when available, throw away the decryption key.
"set -o errexit -o nounset -o pipefail"
It'll save you headaches.
The last option is unfortunately harder to use, since some programs misbehave in pipelines.
set -o nounset (set -u): exit script when it tries to use undeclared variables
set -o pipefail: returns error from pipe `|` if any of the commands in the pipe fail (normally just returns an error if the last fails)
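A quick demonstration of the pipefail difference described above:

```shell
#!/bin/bash
# Without pipefail the pipeline's exit status is the last command's;
# with pipefail, any failing stage fails the whole pipeline.
false | true
echo "default: $?"    # prints: default: 0
set -o pipefail
false | true
echo "pipefail: $?"   # prints: pipefail: 1
```

This matters for constructs like `some_command | tee log`, where a failure in `some_command` would otherwise be masked by the successful `tee`.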
To get the same effect as the "set -o errexit -o nounset", I think you can use "#!/bin/bash -e -u". (There seems to be no option for pipefail.)
bash '-e -u' $file
But you can do #!/bin/bash -eu

#!/bin/bash
set -eu
IFS=$'\n\t'
(Wish there was a shorthand version of pipefail, then I'd always use that too.)

$(cd "${0%/*}")

And yes, dirname is a way out of this. I'd do this:
"$(cd "$(dirname "$0")"; pwd)"
if I wanted the path to the script. I would also sanity-check the path by testing for the existence of some files or directories that are expected to exist under it, before trying to delete it all.

"the ${0%/*} pulls the path out of $0, cd's into the specified directory, then uses $PWD to figure out where that directory lives - and all this in a subshell, so we don't affect $PWD"
So for example, that would take /user/amputect/seam_setup.sh and change directories into /user/amputect.
Since $0 is usually the name of the shellscript itself, this would be trying to obtain the directory path of the shellscript.
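Putting the dirname approach together with the sanity check suggested above -- a sketch, where the marker file steam.sh and the `${dir:?}` guard are illustrative choices, not Valve's actual code:

```shell
#!/bin/bash
# Refuse to wipe a directory unless it contains an expected marker file.
# ${dir:?} additionally aborts the expansion if dir is empty or unset.
safe_wipe() {
  local dir="$1"
  if [ -z "$dir" ] || [ ! -e "$dir/steam.sh" ]; then
    echo "refusing to wipe '$dir'" >&2
    return 1
  fi
  rm -rf "${dir:?}/"*
}

tmp=$(mktemp -d)
safe_wipe "$tmp" || echo "blocked: no marker file"
touch "$tmp/steam.sh"
safe_wipe "$tmp" && echo "wiped after check"
```

Two independent guards -- the marker-file test and the `${dir:?}` expansion -- mean that an unset or bogus variable fails loudly instead of expanding to the root of the filesystem.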
Window managers don't make it any easier, and I put a lot of the blame on them for not making it easy to configure applications to start under different users.
One of the users in the Github thread even mentions how SELinux prevented the same thing from happening on his machine.
Spoiler: due to a forgotten space the entire /usr folder was deleted
Part of the scripts installed a bunch of files into what was supposed to be a fakeroot, however I did not have bash's 'set -u' configured and an incorrectly spelled path variable was null, meaning something like: "${FAKEROOT}/etc" was translated into "/etc". Before I realized it, it had clobbered most of my /etc directory.
When the build failed, I was puzzled. I only noticed there was an issue when I opened a new shell and instead of seeing "[myuser@host ~]#" I got "[noname@unknown ~]#". Uh oh...
Needless to say, I now do my development of those scripts from within a VM.
http://www.penny-arcade.com/comic/1999/01/06
Basically, they called delete-tree. If the user installed or moved the game to a different location -- say, the root -- it would delete-tree away somebody's whole computer. Fun times.
So if you notice something like this happening, shut down the computer ASAP so the deleted files' data can't be overwritten. Plug the drive into another computer, but do not mount it. Instead, run some file-recovery program on it.
For an SSD it becomes murkier though, what with their trimming and automatic garbage collection.
It's not like python isn't available on every major linux distro. It's a little harder to ensure it's on windows, but when Steam is installing every point release of the Visual C++ runtime that has ever existed on my system, why not bundle python in there too?
Because admins gotta admin. Also, adding python to a software suite just to work on the file system... gross.