* Arch Linux: https://wiki.archlinux.org/index.php/Arch_filesystem_hierarc...
* Fedora: https://fedoraproject.org/wiki/Features/UsrMove
* Debian: https://wiki.debian.org/UsrMerge
* Ubuntu: https://wiki.ubuntu.com/FoundationsTeam/Specs/Quantal/UsrMer...
And this has been true for quite a long time. I think the last major distro to do the /usr merge was Debian, and that was already 3 years ago. So I find it surprising that people still think this is an issue.
The only thing that remains is the difference between bin and sbin that Fedora/Ubuntu/Debian still maintain (Arch Linux simply merged everything).
P.S.: of course, there are probably still distros that use a split /bin and /usr/bin, /lib and /usr/lib, etc. OpenWRT is one of them. However, I think the major distros have already migrated.
Edit: Nevermind, I am wrong :)
The Systemd convention? What the fuck? Does Systemd have to touch everything??
systemd itself does not depend on being on a UsrMerge'd system, and otherwise, the proposal does not have anything to do with systemd.
From https://www.freedesktop.org/wiki/Software/systemd/TheCaseFor...
"Note that this page discusses a topic that is actually independent of systemd."
It hasn't happened on openSUSE yet:/
If you look at the very early versions of the Unix manuals, before even /lib was invented, there was a notion that /bin was the "system" (section 1) and /usr/bin was "user software" (section 6). This fell by the wayside when they ran out of disk space on / and started putting section 1 commands in /usr/bin. At some point the system/user distinction was abandoned and everything but the games got moved to section 1 of the manual.
(Back in those early days /lib files used to be in /etc. So did /sbin. There's still a meaningful semantic difference between programs any user could run vs programs only useful to root.)
On *BSD there is no initramfs equivalent and the / partition still serves its traditional role of the minimal startup environment. And /home never existed before Linux - it was always something like /usr/home as far as I can tell.
> And /home never existed before Linux
SunOS used /home before Linux existed.

It's far from mandatory (I've run Gentoo for years without it), and when you have to fix stuff in the mkinitrd script, you just cry...
https://nixos.org/nixos/about.html
A solution like theirs should become the default. So should their transactional package management, given the number of times my Linux packages have been broken by some ridiculous crap. That even a smaller outfit could knock out two problems, one of them major, shows these bigger distros could be gradually knocking out such issues themselves.
If e.g. Ubuntu did this then
1) They'd have to fix all of the packages in the main repository, and then PPAs?
2) Lots of third party software targeted on Ubuntu would break; this would cause a lot of users to want to stay on pre-change versions of Ubuntu, which probably means it would get forked.
The NixOS site made it look like they didn't have to do that with their solution: just give a meta-view on what's already there for the users.
"Lots of third party software targeted on Ubuntu would break; this would cause a lot of users to want to stay on pre-change versions of Ubuntu, which probably means it would get forked."
This could happen with any number of changes. Might still be worth it for at least major changes like making broken packages recoverable. I can't imagine myself getting off Ubuntu just because package failures can't brick my system anymore. Some developers might gripe about having to make changes but that makes them look like assholes, not Ubuntu.
I mean, VMS had this feature in the 80s with its versioned filesystem and transactions at the app level. It's about time at least the package managers on Linux could do the same reliably for apps. At least one of them did, to their credit.
2) Most third-party software is indeed broken out-of-the-box on NixOS; but patchelf and fhsuserenv have been developed so that it can run regardless (there's steam, flash, adobe reader... all patched to work. Running a .deb is as hard as specifying all of its dependencies).
First there was /bin for the things that UNIX provided and /usr/bin for the things that users provided. Part of the system? It lived in /bin. Optional? It lived in /usr/bin.
In SunOS 4 (and BSD 4.2 as I recall) the introduction of shared libraries meant you needed a "core" set of binaries for bootstrap and recovery which were not dynamically linked, and so "static" bin or /sbin (and its user equivalent /usr/sbin) came into existence. Same rules for lib. /lib for the system tools, /usr/lib for the optional user supplied stuff.
Then networking joined the mix and often you wanted to mount a common set of things from a network file server (which saved precious disk space), but there were things you wanted locally; that gave us /usr/local/bin and /usr/local/lib. Then in the great BSD <==> SVR3.2 merge, AT&T insisted on everything optional being in /opt, which had a kind of logic to it, mostly about making packages that could be installed without risking system permission escapes.
When Linux started getting traction it was kind of silly that they copied the /bin, /sbin, /usr/bin, /usr/sbin stuff since your computer and you are the only "user" (usually) so why not put everything in the same place, (good enough for Windows right?)
File system naming was made more important by the limited permission controls on files: user, group, and other is pretty simplistic. ACLs "fixed" that limitation, but the naming conventions were baked into people's fingers.
The proliferation of places to keep your binaries led to amazingly convoluted search paths. And with shared libraries and shared library search paths, even more security vulnerabilities.
Only in the minds of people who don't know much about Windows.
The Windows model is, after all, for globally installed individual packages to be largely rooted at "/Program Files/%COMPANY_OR_PERSON%/%PACKAGE%/" with "%USERPROFILE%/AppData/Local/%COMPANY_OR_PERSON%/%PACKAGE%/" as a root for per-user data. The former dates all of the way back to Windows NT 3.5, the latter "merely" dating back to Windows NT 4 (where it was "Application Data" rather than "AppData").
* https://blogs.msdn.microsoft.com/cjacks/2008/02/05/where-sho...
Unix and Linux people will recognize this as akin to NeXTSTEP's ~/Apps, /LocalLibrary, /LocalApps, and so forth from the early 1990s; and to Daniel J. Bernstein's /package hierarchy (for package management without the need for conflict resolution) from the turn of the century.
* http://cr.yp.to/slashpackage.html
* http://cr.yp.to/slashcommand.html
And a few years after NeXTSTEP introduced its directory hierarchy, SunOS 5 (a.k.a. Solaris 2) introduced the System 5 Release 4 /usr merge.
The Linux (and late UNIX) way was not about changing /usr for each user, but about sharing /usr across several computers. This way, you had to administer just one computer for a network of any number of terminals.
That became obsolete only after modern package and configuration managers were created.
Arch Linux has for a couple of years symlinked /bin, /sbin, and /usr/sbin to /usr/bin, and /lib and /lib64 to /usr/lib: https://wiki.archlinux.org/index.php/arch_filesystem_hierarc...
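For reference, that merged layout can be reproduced in a scratch directory. A minimal sketch (the scratch root is the only made-up part; the link targets match the merged scheme described above):

```shell
# Recreate the Arch-style merged hierarchy under a temporary root,
# so nothing touches the real filesystem.
root=$(mktemp -d)
mkdir -p "$root/usr/bin" "$root/usr/lib"
ln -s usr/bin "$root/bin"       # /bin      -> usr/bin
ln -s usr/bin "$root/sbin"      # /sbin     -> usr/bin
ln -s bin "$root/usr/sbin"      # /usr/sbin -> bin
ln -s usr/lib "$root/lib"       # /lib      -> usr/lib
ln -s usr/lib "$root/lib64"     # /lib64    -> usr/lib
ls -l "$root"
```

Because every legacy path resolves into /usr, a single partition (or snapshot) holds all the binaries while old hardcoded paths keep working.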
If you make your own linux distro from scratch, you can get many of the open source pieces of a linux distro to use whatever layout you want, while some pieces do require patching and fixing ... but you do have the source :)
New FS Layout -> User Adoption -> Profit
Because at the end of the day the layout of the filesystem is never the selling point on any given distro. It always comes down to the software - either the amount of it, or the newness of it, and that is why people flock to Ubuntu and Arch respectively.
It is much more prudent to argue for filesystem improvements in those distros, and get them implemented there, than to fork it out. Gobo demonstrated both the validity and the utility of the approach, but it is up to distro makers to actually use the evidence given that the traditional Unix layout is garbage.
Heck, I think it could adopt any FS layout it wanted as long as some defined place to house the "Programs" tree was provided.
That's the one I was trying to think of! Yeah, they solved the directory problem. NixOS solved the packaging problem. Little projects doing what they can to fix the UNIX/Linux mess.
1. NFS mounts. Your local or initial BOOTP image has a minimal root, your (non-root-writeable, BTW) NFS /usr has Other Stuff.
2. Mount options. Root is often (and perhaps still must be -- /etc/mtab, for example; I've stopped closely tracking the discussion about making root fully read-only) writeable. It may also require other mount permissions, including device files and suid. Other partitions don't require these permissions, and there is some Principle of Least Privilege benefit to mounting such partitions without them. /usr requires suid, but not dev, and may be non-writeable except for OS updates.
3. Recovery partition. I'll frequently stash a second root partition, not typically mounted. It has minimal tools, but enough to bootstrap the system if the primary is hosed for whatever reason (more often myself than any other). Without a clean / /usr split, this becomes more complicated.
As for the recovery partition, you don't need the split for that, either. Just have a live system on the recovery partition that mounts the normal root FS. Then you can chroot into there for recovery tasks.
I don't recall my precise thinking on a clean root vs. /usr split on the recovery partition, though it may have avoided some confusion over binaries. Or perhaps that you could mount the /usr partition itself independently if you wanted, assuming primary root was hosed.
Not being able to mount a separate /usr would negate that option.
/opt: application payload
/etc/opt: application's configuration files
/var/opt: application's data.
For applications with more than two configuration or data files, it is a good idea to create /etc/opt/application and /var/opt/application.
If your company is mature enough to understand what system engineering is, and employs dedicated OS and database system engineering departments, /opt/company, /etc/opt/company[/application], and /var/opt/company[/application] become very practical. If this approach is combined with delivering the application payload and configuration management as operating system packages, one only need worry about backing up the data in /var/opt, and that's it.
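A sketch of the layout described above, with a hypothetical vendor "acme" and application "reportd" (both names invented), rooted in a scratch directory so nothing touches the real filesystem:

```shell
root=$(mktemp -d)
mkdir -p "$root/opt/acme/reportd/bin"    # application payload
mkdir -p "$root/etc/opt/acme/reportd"    # configuration files
mkdir -p "$root/var/opt/acme/reportd"    # application data: the only
                                         # tree that needs backing up
find "$root" -mindepth 1 -type d | sort
```

With payload and configuration delivered as OS packages, the /var/opt/acme tree is the entire backup surface for the application.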
What? Why should it be impossible for a third-party package to just create /opt? They will probably need to extend the PATH and LD_LIBRARY_PATH, but /etc/profile.d is very much standardized AFAIK.
It is not impossible, obviously, but /opt and such should come from the operating system's vendor, and the vendor should be the only one to decide which filesystem permissions to provide: 0755? root:bin or root:sys? root:root? bin:bin? The vendor should decide that, since a vendor is supposed to know their operating system best.
Third parties might not agree, or even decide correctly for that operating system.
This is system engineering and architecture, something which, operating system vendors aside, software vendors do not have a clue about in the slightest.
The power of run-as-su installers...
/usr/{bin,sbin} is for stuff you can live without but expect to be there (e.g., make, grep, less).
/usr/local/{bin,sbin} is for stuff you/your local admin installed (e.g., mosh, cmake).
Also, I use $HOME/{bin,sbin} for wrapper scripts, binaries that I need but don't want to install system-wide (single-file C utils that come without man pages, stuff like that).
I'm not sure where the confusion comes from and I don't really see any advantage in merging / and /usr. On the flip side, I do think there's value in keeping /{bin,sbin} as small as possible (because that stuff has to work).
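One way to wire that up in a shell profile, as a sketch (the exact ordering is a matter of taste; $HOME/sbin is unusual and is only here to mirror the layout above):

```shell
# Personal dirs first, then locally installed software, then the system's.
PATH="$HOME/bin:$HOME/sbin:/usr/local/bin:/usr/local/sbin:/usr/bin:/usr/sbin:/bin:/sbin"
export PATH
echo "$PATH"
```

Putting $HOME/bin first lets the wrapper scripts shadow system binaries of the same name.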
I expect with the increasing moves towards app stores and sandboxing on all platforms that the days of installing packages contents all over the filesystem are limited and things like xdg-app are probably going to take over with an app being mounted into the filesystem in a sandbox as it is run.
1 - Establish a new, consistent filesystem hierarchy. New code should use the new hierarchy. Move the legacy stuff to the new hierarchy and symlink the old locations to avoid breakage.
2 - After some time, make the legacy locations read-only. This will highlight any non-conforming code.
3 - After even longer, delete the legacy paths.
It would maybe be even easier if we could redirect writes (I'm not too versed in filesystem capabilities) from the legacy paths to the new ones.
systemd is kind of enforcing this already, preferring /usr (where most of the packages go) over naked paths. This is the trend on relatively modern desktop systems, which thankfully preserves standard fs hierarchy while mostly keeping everything in one place. (Unfortunately there are bunches of non-desktop uses for Linux out there.)
See also https://freedesktop.org/wiki/Software/systemd/separate-usr-i....
> It would maybe be even easier if we could redirect writes (I'm not too versed in filesystem capabilities) from the legacy paths to the new ones.
Quite a few distros (and users like Rob Landley) use symlinks from /bin to /usr/bin and the same goes for /lib. sbin is sometimes linked to bin. This does some of the redirection.
(I am not aware of the /usr/tmp and /var/etc stuff mentioned in the message.)
> After even longer, delete the legacy paths.
Nah. Millions of scripts have shebangs like #!/bin/sh, and I don't really expect to see that change...
Workarounds using `env` are kind of crippled, since a) they introduce an extra level of dependency and indirection, and b) they stop the user from reliably passing options to the interpreter (Linux only cuts at the first space, with all sorts of other *nixen doing different things with multiple spaces).
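The single-argument behavior is easy to demonstrate with a throwaway "interpreter" that just prints its arguments (all paths below are temporary scratch files; this relies on Linux allowing a script to serve as a shebang interpreter):

```shell
dir=$(mktemp -d)

# A fake interpreter that prints each argument it receives on its own line.
cat > "$dir/show-argv" <<'EOF'
#!/bin/sh
for a in "$@"; do printf '[%s]\n' "$a"; done
EOF
chmod +x "$dir/show-argv"

# A script whose shebang tries to pass two "options" to that interpreter.
printf '#!%s -x -y\n' "$dir/show-argv" > "$dir/script"
chmod +x "$dir/script"

"$dir/script"
# On Linux, the first line printed is "[-x -y]": both options arrive
# as a single argument, followed by the script's own path.
```

This is exactly why "#!/usr/bin/env python -u" fails on Linux: env is asked to find a program literally named "python -u".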
You never really search /Programs for binaries; you aim your PATH at /System/Index (which houses the equivalent of the classic FHS for compatibility reasons). This means that things do not really need to live in /Programs to be useful.
And because of that you have two tools, DetachProgram and AttachProgram, that allow you to place a package (really just a tarball of its /Programs branch) in an arbitrary location and have it be symlinked into place.
The thing about GoboLinux is that it does not depend on special Gobo-only features. Everything is built around tools you will find in any *nix, and any action can be performed manually if need be.
Nor does it demand that something upstream do something special to placate them, as long as --prefix or similar actually works.
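A sketch of the idea in a scratch directory: a versioned package tree plus a symlink index standing in for the classic FHS. The package name "Hello" and its version are made up, and GoboLinux's real AttachProgram does considerably more than this:

```shell
root=$(mktemp -d)

# The versioned package tree.
mkdir -p "$root/Programs/Hello/1.0/bin"
printf '#!/bin/sh\necho hello\n' > "$root/Programs/Hello/1.0/bin/hello"
chmod +x "$root/Programs/Hello/1.0/bin/hello"

# "Attach" the package: symlink its binaries into the index that
# plays the role of /usr/bin.
mkdir -p "$root/System/Index/bin"
ln -s ../../../Programs/Hello/1.0/bin/hello "$root/System/Index/bin/hello"

PATH="$root/System/Index/bin:$PATH" hello   # prints: hello
```

Detaching is just removing the symlinks; the versioned tree itself never moves, which is why everything can be done by hand with plain *nix tools.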
This also does not mean that GPLv3 makes software signing impossible and that you must forbid users from rejecting unsigned software if they so wish. It doesn't mean that you have to distribute every secret key that you use for signing software. It merely means that you have to give your users a way to install software on the User Product as they see fit, if they see fit. It's up to the user to decide to override any signing feature or not. It's very much in spirit with GPLv2 that required installation scripts. As far as GPLv3, hardware signing keys are just another part of installation scripts (the actual term used in GPLv3 is "Installation Information").
http://radar.oreilly.com/2007/03/gplv3-user-products-clause....
https://copyleft.org/guide/comprehensive-gpl-guidech10.html#...
It solidifies the freedoms the FSF champions.
GPLv3: as worthy a successor as The Phantom Menace, as timely as Duke Nukem Forever, and as welcome as New Coke.