Yes, people/sysadmins should take the time to properly configure SELinux when things don't work, instead of just disabling it completely for good. I tried for a whole year at a place where we used CentOS, and then finally I gave up: too many hours wasted finding the right configuration for each new program, etc.
A system which is easy to use securely will stay secure, a system which is difficult to use securely will be insecure.
I used to work at a payment provider and we had to deal with lots of monitoring and security stuff. Some of it was (obviously) busywork and needless checkbox filing, but other parts were genuinely useful. Setting up systems was tedious and difficult, but ultimately worthwhile and necessary.
-- A developer whose app needs to run as root (for a well-documented reason, and with a tight systemd sandbox hiding most of the filesystem from it)
SELinux is complex, badly documented, its policy code is obscure macro incantation, and basic debugging tools (such as audit2allow) often aren't installed out of the box on server distros. But for the day-to-day administration of systems, policies are included for distribution packages and most issues can be fixed by enabling a boolean here and there and relabeling files.
The principles, basic admin & debugging part can be learned in a couple of hours, and when you have custom service software, you can throw it in /opt and have it run unconfined (ie: not subject to SELinux rules).
And I don't agree with the article that containers do not add security. The container runtime implements namespace isolation, seccomp filters, etc., and that reduces the attack surface compared to running the software directly on the host OS. More importantly in this discussion, it is convenient for sysadmins.
There is no perfect security anyway. And I don't sacrifice convenience for national security level security :)
Most of the places that used CentOS as a 'free' Linux ended up shutting SELinux off, though.
> The policy language and tooling is cumbersome, obtuse, and is about as appealing as filling out tax forms.
If a security framework is so terribly complex and hard to use, then people won’t use it.
OTOH, look at how OpenBSD or OpenSSH approach security: simple primitives which are well documented and easy to understand.
The only reason SELinux even works in the few scenarios where it does is because the operator had an immense amount of resources to pour into it. This itself is another sign of how bad the design is: it’s so complex that no small team of humans has ever been able to use it.
Sometimes, when the developers make a mistake, which is unavoidable in a large project, it is nice to be able to lock down applications as the administrator. I just don't think SELinux is the right tool, because the chance of you making a mistake in the configuration is pretty high. The functionality is there, but it needs to be easier to write policies, and maybe that comes at the cost of some flexibility.
I'm hoping it sticks. Just check audit logs when you get an error, it is not that hard, right?
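For the simple cases that's true: a denial shows up in the audit log as an AVC record, and the fields tell you who was blocked from doing what. A sketch of pulling out the interesting fields (the log line below is a fabricated sample; on a real system you'd use `ausearch -m avc` or read /var/log/audit/audit.log):

```shell
# Sample AVC denial line (fabricated for illustration)
line='type=AVC msg=audit(1660000000.123:456): avc:  denied  { read } for  pid=1234 comm="nginx" name="index.html" scontext=system_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file'

# Extract the denied permission, the process, and the object class
perm=$(printf '%s' "$line" | grep -o '{ [^}]* }')
comm=$(printf '%s' "$line" | grep -o 'comm="[^"]*"')
tclass=$(printf '%s' "$line" | grep -o 'tclass=[a-z_]*')
echo "denied $perm for $comm on $tclass"
```

The scontext/tcontext pair is usually what tells you whether it's a mislabeled file (fix with restorecon) or a genuinely missing rule.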
These days that's just laziness/fear, and it's a shame to see it. It doesn't even make sense any more.
One crucial change for the better was leaving third-party software in a permissive state. From that point onwards, disabling SELinux is cargo-cult sysadmin'ing.
SELinux is not hard if you understand its basic principles. But no one bothers, because SELinux is the bogeyman.
Yes, writing policies means getting knee-deep in macros, and it's hard because many services try to access anything and everything. But almost no one needs to write a policy.
At most you need to tell SELinux that some non-default directory should have some label. That's not hard.
But that's exactly what I would like to do! I've never seen a real guide for how to set up a policy for a custom daemon I wrote myself. Or when a specific software doesn't come with a policy.
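For what it's worth, a minimal "from scratch" module for a hypothetical daemon can look like the sketch below. All the names are made up, the macros come from the reference policy, and the build path assumes RHEL/Fedora's selinux-policy-devel package:

```shell
# Write a minimal type-enforcement module for a hypothetical "mydaemon"
cat > mydaemon.te <<'EOF'
policy_module(mydaemon, 1.0.0)

# Domain for the running daemon, and a type for its executable
type mydaemon_t;
type mydaemon_exec_t;
init_daemon_domain(mydaemon_t, mydaemon_exec_t)

# A type for its config files, readable by the daemon
type mydaemon_conf_t;
files_config_file(mydaemon_conf_t)
allow mydaemon_t mydaemon_conf_t:file read_file_perms;
EOF

# To build and load on a real SELinux system:
#   make -f /usr/share/selinux/devel/Makefile mydaemon.pp
#   sudo semodule -i mydaemon.pp
```

From there you label the binary and config with `semanage fcontext` and iterate on denials with audit2allow.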
In my experience, it's not just directory labels ("semanage fcontext -a -e ..." and friends). Once in a while you also need to set some booleans ("semanage boolean ..."). Yes, it's not hard once you know about it.
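Concretely, those two kinds of fixes look like this. These are configuration commands that need root on an SELinux-enabled box, and the paths, types, and boolean are just common examples:

```shell
# Make a non-default directory follow the same labeling rules as the default one
semanage fcontext -a -e /var/www /srv/www
restorecon -Rv /srv/www

# Or assign an explicit type to a path pattern
semanage fcontext -a -t httpd_sys_content_t '/srv/www(/.*)?'
restorecon -Rv /srv/www

# Flip a boolean persistently, e.g. let httpd make outbound connections
setsebool -P httpd_can_network_connect on
```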
We don't disable SELinux.
SEL seems to work under the premise that if it’s too complicated for you to use, the attacker has no chance.
Few will instead read the RHEL provided documentation. Then they could maybe figure out whether there's simply a tunable (getsebool -a) which would enable the desired behavior, or if properly labeling files (semanage fcontext / restorecon) would do it, or even take the steps to add to an existing policy to allow for a specific scenario which somehow was not implemented. Even adding your own policies "from scratch" is certainly doable and provides a great safety net especially for networked applications.
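As a rough triage order, for illustration (all run as root on an SELinux system; the module name is made up):

```shell
# 1. Is there already a tunable for the behavior you want?
getsebool -a | grep httpd

# 2. What exactly was denied recently?
ausearch -m avc -ts recent

# 3. If files look mislabeled, restore the expected labels (path illustrative)
restorecon -Rv /srv/www

# 4. As a last resort, generate a local policy module from the logged denials
#    (review the generated rules before loading them!)
ausearch -m avc -ts recent | audit2allow -M mylocal
semodule -i mylocal.pp
```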
Anyway... we all know disabling security or not implementing it in the first place can really save you a lot of time. At least in the short run.
The way I put it to my clients, and staff, is simply that security comes at the cost of convenience.
This is so funny, because whenever I suggest Fedora Silverblue to a moderately experienced Linux user who wants a simple distro, the first thing I do is recommend setting SELinux to permissive mode, and I get a bunch of hand-wringing comments about how you shouldn't do that.
It's almost like a silent filter working in the background of your OS that doesn't even tell you when it blocks something is a pretty user hostile feature and no one wants to learn how to speak SELinux so they can effectively use it.
Sometimes it seems like Linux people don't want others using it. Even when they belong to evangelist platforms, they like to create huge barriers for entry and then blame new users for not "getting it."
SELinux does better or worse, depending on perspective, by actually enforcing controls
Sidenote: I don't like the implication that community-driven projects are inherently less secure.
> Lack of Resources: Debian as a community-driven project lacks the resources to develop and maintain comprehensive security policies comparable to those provided by Red Hat.
For shared server usage. Most servers are single-use, which makes SELinux mostly useless again.
And on those shared servers, you have to define your actual policies for it to be useful... What a total of 0 people do.
It's hard to completely dismiss the idea that SELinux was a NSA plot to keep userspace capabilities out of reach on consumer OSes.
It should be trivial to dismiss given the widespread usage and real world advantages it provides.
And no, a single use server doesn't make SELinux useless. It still means SELinux can lock down whatever services are offered on that box better than pretty much anything else can.
Indeed, since the dawn of virtualization and automated deployment, shared servers are a legacy behavior. Well, in Debian's world, at least: for RHEL, you may pay per instance, so there is a financial incentive to share said instances.
Ergo, RHEL and friends are inherently less secure than Debian.
I don't like it either, but it may be true anyway. Although I don't think it would be resources so much as focus. The Debian community is not that small.
RedHat can declare that everything on the system is going to have SELinux policies following consistent guidelines on what to lock down, and all employees will work with the security team to make this happen. That is harder to do in a community driven project like Debian where ownership and work is widely distributed and entirely voluntary. It can really only happen when the goals are already a strong part of the culture and there is buy-in for specific rules to achieve those goals. For example, Debian's strong free-software requirements have been there from the beginning and so most Debian volunteers are self-selected to agree with or at least tolerate them, and even that has frequent arguments. Security culture is much more mixed, and there are a lot of people in the free software community who think that security starts and ends with fixing bugs when they are found, and push back hard on suggestions that anything more is needed. It is going to take a long time to change that culture.
The former strongly implies that, if you're using it for the latter case, you'd really better know what you're doing. But this capability/competence-versus-task-fit question gets glossed over in the paragraph where the author basically says: because Red Hat chose to be a bag of dicks, jumping ship to Debian is the "logical move". It isn't if you don't know what you are doing. And it's sad that RH exited this space, leaving a civil cybersecurity hole. The lack of a truly Free and "OOB secure" OS seems the case in point.
There are other reasons to doubt the security of Debian, but "you're using it wrong" isn't the best one to discuss.
As a heavy open source contributor, I don't like it either. But I'd be kidding myself if I thought volunteers approach all aspects of software development with the same rigor as someone doing it professionally. I'm guilty of that myself; I do the things I find fun, and often don't do the things I find tedious (or have to force myself to do them because I know that future-me will be pissed off at present-me if I don't).
Still, though, there are plenty of for-profit organizations out there that don't feel it's cost-effective to be rigorous about security or some other thing. And many (most?) developers and ops people are evaluated not on how bug-free and secure their work product is, but by how quickly it gets done and shipped to customers.
Does, like, anything on mainstream Linux distributions really sandbox applications by default? Let's say I run a browser, a mail client, Signal, Discord, whatever on my laptop. If one of them has a code execution vulnerability, does anything prevent that app from reading/writing all of my home directory, taking screenshots, sending keystrokes to other applications, etc.?
I haven't used anything but Linux on my laptops and PCs for at least a decade, and I genuinely don't know the answer. Back when I started with Linux, the answer was surely a "no", but maybe something has improved in this regard?
I don't know much about the specifics, but I think Wayland fixes a lot of the security problems related to keylogging and screenshotting.
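One partial answer on mainstream distros is Flatpak: apps declare their filesystem and device access up front, and you can inspect and tighten the sandbox per app. A sketch (the app ID is just an example, and this obviously only covers apps installed as Flatpaks):

```shell
# Show what the sandbox currently allows for an app
flatpak info --show-permissions org.signal.Signal

# Tighten it: deny this app access to the home directory
flatpak override --user --nofilesystem=home org.signal.Signal
```

Wayland closes the cross-app keylogging/screenshot hole; Flatpak portals handle file access via explicit user choice.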
Given that Google uses Debian internally for their workstations [1], employs a number of Debian developers [2], and has discovered and fixed security issues in Debian [3], I find this argument to be entirely disingenuous.
Sure, Red Hat has a well funded security team. But so does Google, and all of the other Debian users in "big tech".
[1]: https://en.wikipedia.org/wiki/GLinux [2]: https://www.reddit.com/r/debian/comments/j4liv4/comment/g7mm... [3]: https://lwn.net/Articles/676809/
The point made in the article is that security is hard and often thankless work. So it's not something that's conducive to volunteers doing in their free time often. It does take funding to move the needle on this here, and I think Red Hat is proof of that.
To the extent that containers are a software distribution method outside of a single authority, they are a security nightmare. They are the exact equivalent of shipping a developer's laptop off to the datacenter and replicating it as a production image.
If you're building your containers on a developer laptop and then pushing them to the registry from there, yes.
You can also not do that and instead have all builds happen on a CI server that isn't ever touched directly by anyone, like you should really be doing to build any artifact that gets deployed to production, container or otherwise.
The reason docker containers are absolutely everywhere is that it's a convenient way to ship software that skirts around the notion that most Linux distributions are spaghetti balls of needless complexity with distribution and version specific crap that you need to deal with.
Back in the day I had to package up my software as an rpm to give it to our ops department who would then use stuff like puppet to update our servers. I also got exposed to a bit of puppet in the process. Not a thing anymore. Docker is vastly easier to deal with.
From a security point of view, the most secure way to run docker containers is some kind of immutable OS that only runs containers, which is probably neither Red Hat nor Debian based, because having package managers on an immutable OS is kind of redundant. Which is more or less what most cloud providers do to power their various docker-capable services. And of course the OS is typically running on a VM inside another OS that probably also is immutable.
Docker removed the need for having people customize their servers in any way. Or even having people around with skills to do things like that.
Being container focused also changes the security problem from protecting the OS from the container to protecting the container from the OS. You don't want the OS compromised and doing things it shouldn't be doing that might compromise the one thing it is supposed to be doing: running your docker containers. Literally the only valuable thing it contains is that container.
And it indeed matters how you build and manage those.
I hear that a lot, but it's not really true, or it is true only if the developer created the image manually. Does anyone do that?
As soon as you use a Dockerfile you have reproducible builds, allowing you to use a different base image, or even perform the installation without containers at all.
That is extremely optimistic. As soon as you do anything involving an update - `apt-get update` or similar - it's not reproducible any more, and of course you do need to do those things in most images. And if you don't need to do that, you can probably avoid doing the whole Dockerfile thing in the first place (although that may not be so easy if you're not set up for it).
Depends on how you build your containers. If you have a build step, which pulls your dependencies from a trusted source and versions are locked down, then MAYBE. I've seen developers have all that in place, then in their deployable container they start by doing "apt-get update && apt-get upgrade" in the Dockerfile and install some runtime dependency that way.
There is also another problem, which I believe is what OP is referring to: People will write docker-compose file, Helm charts and what-have-you, which pulls down random images from Docker hub, never to upgrade them, because that breaks something or because: It's a container, it's secure. Fair enough if you pull down the official MariaDB image, or KeyCloak, you still need to upgrade them, and often, but they are mostly trustworthy. But what happens when your service depends on an image created by some dude in Pakistan in 2017 as part of his studies, and it has never been upgraded?
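Pinning helps with both the "not reproducible" and the "never upgraded" problems: fix the base image by digest, keep installs explicit, and rebuild on a schedule instead of trusting a floating tag. A sketch (the digest is a placeholder, not a real one; resolve the real value with `docker inspect --format '{{index .RepoDigests 0}}' <image>`):

```shell
# Write a Dockerfile that pins its base image by digest (placeholder digest)
cat > Dockerfile <<'EOF'
FROM debian@sha256:PLACEHOLDER_DIGEST
RUN apt-get update \
 && apt-get install -y --no-install-recommends ca-certificates \
 && rm -rf /var/lib/apt/lists/*
EOF

grep '^FROM' Dockerfile
```

The digest-pinned FROM line means two builds see the same base filesystem; the periodic rebuild (bumping the digest) is what actually picks up security fixes.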
I had this discussion with a large client. They were upset that we didn't patch the OS right when new security updates came out, which to me was pointless when they shipped a container with a pre-release version of Tomcat 8 that was already 18 months out of date and had known security flaws.
As for the efficacy of the two, I'm less interested in the feature sets of the two. I think what'd be more interesting is replicate exploitation scenarios with their default policies and see which subsystem succeeds in mitigating the exploit and which fail.
The feature set is exactly what dictates which systems are more likely to prevent exploitation, though.
AppArmor simply isn't as granular, and is simpler to bypass (e.g. by making a hardlink to a file to sidestep an AppArmor path-based policy).
AppArmor may be good enough in many situations, but SELinux gives you much more control, so you can be much closer to perfect to protect against unknown situations.
As a datum, I have a laptop that's running Fedora, the install is on the order of ten years old (routinely upgraded to new releases), and it's never had SELinux disabled.
> Still. Many in the open source community have interpreted Red Hat’s decision for what it really was: A dick move.
I've had a short essay in draft for a while about the difficulty of a small business trying to make money using The Red Hat Model (https://opencoreventures.com/blog/2023-04-red-hat-model-only...). Red Hat seem like an outlier who're doing well with that model, but smaller places like Sidero or Bitfield had to find other ways to monetise their open source efforts, and sometimes that had pushback from the community.
Red Hat, though, were acquired by IBM, and IBM made it harder for an otherwise thriving ecosystem to exist. Not impossible, but harder. IBM makes money hand over fist (billions according to https://www.ibm.com/annualreport/). Was there really a reason to make Red Hat harder to redistribute? The interviews I've read come down to "our Red Hat team works hard and we don't want to give that away to low effort projects", though if you've got an interview with a different perspective I'd love to read it.
Ubuntu and Debian are for hobbyists and lawyers, and you should never run a public server on Debian/Ubuntu if you care about security.
And Linux in general has fewer resources to develop and maintain comprehensive security policies comparable to those provided by Microsoft.
Yet here we are, with Microsoft products so "secure" that they're insecure unless you have a PhD in b****, so convoluted and over-built that people have to migrate away from them just to recover the actual security they used to enjoy back when they could wrap their heads around the whole stack.
If devs want things to be more secure, stop developing more acronyms and just educate the userbase on the acronyms they already have.
https://arstechnica.com/security/2024/01/microsoft-network-b...
Wasn't caught for two months and wasn't fixed until months after. How is Microsoft allowed anywhere *near* the bidding process for gov contracts anymore?
Certainly SELinux has its place but I never found the value it offers to be worth the complexity it adds.
This is akin to someone writing an article about how Oracle and Microsoft got databases wrong because they didn't embrace some security feature that only DB2 has and that more than half of DB2 users out there think is a giant pain in the neck.
> Learn the basics of SELinux, including type enforcement, Multi-Category Security (MCS) Enforcement, and Multi-Level Security (MLS) Enforcement, with the help of some friendly cats and dogs!
Complexity is generally really bad for security. It results in people working around the system or just turning it off. Security is not just "in theory" - a perfectly secure system that most users disable is an insecure system.
It reminds me a bit of the idea of making people change their password every month. Sure, in theory it reduces the time a compromised credential can be abused for. In practise, though, it means nobody can remember their password, so people start using really poor passwords and writing them down on post-it notes. The net result is much worse security practically speaking, even if it's better theoretically.
Debian uses AppArmor by default, probably because of the Canonical influence (there are more Debian developers and maintainers paid by Canonical than by RedHat).
But you can run Debian with SELinux (as well as with other LSMs, MACs, etc like Tomoyo).
At my last jobs, we disabled SELinux, AppArmor and auditd on Debian/Ubuntu, just for the sake of performance. And we never detected any security issue for our usage and requirements. So I'm not an expert in this field.
Not sure what the purpose of the article, or the whole blog, is. Do you want to influence the choice between Debian vs RHEL vs Oracle Linux somewhere? As I'm not sure, I'll stop here.
I do a lot in Kubernetes, and there's been more than one CVE with a line like "Affects all versions of docker/containerd, unless running SELinux," which gave me a lot of reassurance that the effort put into making SELinux work was worth it.
Now that I'm on Debian, I'm slowly building a new set of policies for the OS. Thankfully SELinux has an excellent reference policy[1] to base off of. I'm hoping my new Debian base images for my homelab & elsewhere will have a nice restrictive default SELinux policy by the end of the year. I hope there's more community effort here as well; AppArmor really can't compare to SELinux, which is absolutely necessary for proper security.
Honestly I'd love it if the wider community took another stab at replacing SELinux with an LSM that had similar functionality but better tooling and design. I'd pick it up in a heartbeat, but right now SELinux is what we have.
I've seen this too, but I usually see AA mentioned in the same situations as an equivalent mitigation to SELinux.
- CVE-2016-9962 - Bypasses Apparmor [2], mitigated by SELinux
- CVE-2022-0492 - Apparmor and Seccomp also protect against
- CVE-2019-5736 - Mixed, blocked by the default SELinux policy in RHEL (not Fedora), not blocked by the default AppArmor policy[3]
- CVE-2021-3156 - This one is not a good one for RedHat to put on the list. SELinux by default doesn't protect against it, Debian 10 at the time had a Linux security feature enabled (fs.protected_symlinks) that helped mitigate it, and additionally CVE-2021-23240 came out which had similar effects but only occurred on SELinux systems.
- CVE-2019-9213 - Not mitigated by AppArmor, mitigated by SELinux
- CVE-2019-13272 - Not mitigated by AppArmor, not mitigated by default SELinux policy, but easy to mitigate by enabling boolean. I'd consider this a win for SELinux, but only just.
While digging into this more, I came across this BlackHat talk[4] which really quantifies how SELinux improves security (though doesn't contrast it with AppArmor). I also came across a paper on usability of SELinux and AppArmor[5] which brings up an interesting point: If the tool is too complex, even if it's more powerful, more often than not it won't end up having better results.
That's all to say, I think if you're willing to invest a lot of time into it (say you want to make security your niche in your development career), SELinux is still the best. But I can see why many may gravitate towards AppArmor so as to not make perfect the enemy of good. That said, I still wish Debian had a choice between the two, right now SELinux isn't really doable without a lot of work.
[1]: https://access.redhat.com/solutions/7032454
[2]: https://github.com/opencontainers/runc/issues/2128
[3]: https://www.cloudfoundry.org/blog/cve-2019-5736/
[4]: https://www.youtube.com/watch?v=EkL1sDMXRVk
[5]: https://researchportal.murdoch.edu.au/esploro/outputs/journa...
SELinux can be frustrating without the proper background about what it is, how it works, and how it helps you. There is a surprising amount of tooling for it actually.
“Red Hat owned making this policy apply to most of the popular software they distribute. On Debian the users have to set everything up.” — this sentiment is directly parallel to how BSDs see themselves as providing a whole consistent operating system, Linux meanwhile just wants to ship a kernel.
“Debian doesn't care enough about security.” — says everyone who runs OpenBSD.
“With SELinux policies, containers are isolated from the system.” — you could almost say they are “in jail,” maybe we could package this up as a syscall, hm, but what to call it...
IDK what BSD looks like in 2024, but in ~2004 you would have seen this exact same article about Debian, but comparing to FreeBSD instead of RHEL.
Linux is just an OS kernel. If you want a consistent OS, use RHEL, Ubuntu, Fedora, Android or something else.
> The ugly truth is that security is hard. It’s tedious. Unpleasant. And requires a lot of work to get right.
I use Red Hat-based distributions at work and Debian/Ubuntu in my personal life. A few years ago, I bit the bullet and learned enough of SELinux to run my workstation and all my servers in enforcing mode. The author of this article is right to credit Red Hat for all the work they’ve done to provide users with default SELinux policies that work out of the box. At one time, I considered installing SELinux on my Debian system and modifying Red Hat’s policies to work with the Debian packages. I realised how much work would be involved so I chose the path of least resistance: AppArmor (which does the job).
These aren't "attack surfaces left exposed" this is "users allowed to control their own computer and decide for themselves". And I notice the vast majority of this complaint about insecurity is not about running applications on Debian or RHEL, but instead about the systems built up for running things containerized and trying to mitigate all the problems that causes. Debian concentrates more on actually having an OS you can run applications on rather than a system for deploying containers.
>In the end, the choice between Debian and Red Hat isn’t just about corporate influence versus community-driven development. It’s also a choice between a system that assumes the best and one that prepares for the worst. Unfortunately in today’s highly connected world, pessimism is a necessity.
In the end it's about whether you think you should control your computer or whether someone else will control your computer. Pick appropriately for context.
I suspect Debian is used on more server installs than desktop ones. While it doesn't come with enterprise support options like RedHat it is most certainly used on servers, many of which are in corporate environments and are running multiple services (in containers often) or are otherwise multi-user.
Debian is many other things as well!
The real risk comes from network-facing services and they are much better protected by seccomp and cgroups, usually configured in systemd, and Debian uses that extensively.
Seccomp can even block vulnerable system calls outright; SELinux is not able to do that.
`systemd-analyze security <service unit name>` gives a nice list of things to consider tweaking. You don't have to fix everything or pay attention to the exposure ratings, just use it as a guide.
I did this for chrony, haproxy, nginx, tor and unbound on my Debian router. I also have some timer units to run scripts to update DNS blocklists and such, which have the same kind of hardening. For the services, some of them have caveats and can't be fully hardened, eg unbound needs PrivateUsers=no because it fails if it can't find an unbound:unbound user to switch to, even if it was already started as unbound:unbound by systemd. And SystemCallFilter makes it easy to get overzealous and only allow the exact set of syscalls that you see the service making, only to have a service update or glibc update that starts making a new syscall and requires another round of debugging, so do it in moderation :)
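To make the above concrete, here's roughly what such a hardening drop-in looks like. The unit name and settings are illustrative, and the demo writes to a temp dir; on a real system the file would live at /etc/systemd/system/unbound.service.d/hardening.conf, followed by `systemctl daemon-reload && systemctl restart unbound`:

```shell
# Demo: write a hardening drop-in into a temp dir instead of /etc
unit_dir=$(mktemp -d)
cat > "$unit_dir/hardening.conf" <<'EOF'
[Service]
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
SystemCallFilter=@system-service
EOF
cat "$unit_dir/hardening.conf"
```

`@system-service` is a predefined syscall group that is far safer than hand-listing syscalls, for exactly the glibc-update reason mentioned above. Re-run `systemd-analyze security <unit>` after each change to see the score move.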
If an attacker gets execution in userspace, it's best to assume they can also get into the kernel via some 0-day local privilege escalation...
There need to be better graphical tools for this, like a "profiler" or similar that watches a process for a while, catches denials caused by the config, and incrementally adds permissions while the process is running.
In my opinion, systemd sandboxes are where it's at. [1] They are seccomp based sandboxes, but have a lot of isolation and sandboxing features that are very easy to use, and they can also be incrementally enhanced with both SELinux and AppArmor profiles.
[1] "man systemd.exec" or https://manpages.ubuntu.com/manpages/bionic/man5/systemd.exe...
That way if someone does manage to break out of a container they have the privileges of a dummy user that doesn't exist on the host, so unless they are using a kernel exploit, they don't have any privileges to be able to do any damage.
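That "dummy user" mapping is user-namespace remapping. Rootless podman does it by default; with Docker it's a daemon-wide setting. A sketch, written to a local example file here rather than /etc/docker/daemon.json:

```shell
# Illustrative Docker daemon config enabling user-namespace remapping,
# so container root maps to an unprivileged host UID ("default" makes
# Docker create and use a dockremap user). Requires a daemon restart.
cat > daemon.json.example <<'EOF'
{
  "userns-remap": "default"
}
EOF
cat daemon.json.example
```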
That's not to say I never use it: on very specific systems that need to be hardened, I do enable SELinux and am glad it's an option. And if I have to use a security layer, I'll take the object-based SELinux over the path-based AppArmor.
Red Hat has made some bad moves, deploying unfinished, unstable and insecure software.
Today most security breaches come from crappy applications with an immense set of dependencies put into production because someone wants them. There is no protection for those, and adding long and painful system-level machinery is just a way to end up with badly configured systems on top.
Debian's issues lie more in complex custom setups: preseed is a nightmare compared to NixOS, and that matters much more than SELinux, which is regularly disabled on most deploys anyway.
I wonder how many people that agree with this nonsense position also agreed with the keepassxc nonsense position.
You have to do 100% coverage testing on whatever program you're using. (Good luck if you don't have the source code.) Otherwise, you don't have any guarantee that your program won't seemingly be killed randomly.
Good luck, x2, if you have some snake oil "endpoint security" that keeps overriding your SELinux policy changes.
"Generate an SELinux policy for daemon X. This daemon accesses its config file in /etc and its runtime data in /var/x. It listens on the network. All other activities should be disabled."