We run a lot of CentOS 5 virtual machines (and some physical ones, plus some RHEL 4 and even a few Fedora Core 8 and 4!), with no end in sight... :(
It is a huge concern for the Infra team and a source of many headaches, and we have to jump through hoops to keep them running, but:
- Clients don't want to move off OLDPRODUCT, which requires an extremely old PHP.
- The dev team is not interested in migrating OLDPRODUCT to a modern platform, or even in trying to put it in a container. Their eyes are on the shiny NEWPRODUCT, which seemingly never quite makes it to production (only one client has signed for it).
- New clients are still regularly signed on OLDPRODUCT.
- No one in the org wants to pay for a migration anyway.
- When some clients complained about apparently poor security, whatever was visible was simply hidden behind a newer HAProxy.
Surely this wouldn't take more than 2 weeks: figure out the install instructions for the old piece of software, rewrite them as a Dockerfile (or a similar set of instructions to build an OCI image; there are other options out there too), set up some basic CI/CD that runs the Docker/Podman/... build command, and update any relevant documentation.
I actually did that recently with some legacy software. Apart from additional optimizations like clearing the yum/dnf cache after installs, it was pretty quick and easy to do. If you are also in a situation where you need to use EOL software, I don't think there are many other decent options out there, short of trying to reproduce the old runtime on a new OS (as others suggested).
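As a rough sketch of the approach described above: a minimal Dockerfile for a hypothetical legacy PHP app on an EOL base image. Everything here is illustrative, not taken from the thread -- the base image tag, package names, paths, and repo rewrite will all need adjusting for a real setup (CentOS 5's original mirrors are gone, so the repos have to be repointed at vault.centos.org, and the exact Apache foreground invocation depends on the httpd version).

```dockerfile
# Hypothetical sketch: containerizing a legacy PHP app on an EOL base image.
FROM centos:5

# Repoint yum at the archived CentOS vault, since the original mirrors are gone,
# then install the old stack and clear the yum cache to keep the image small.
RUN sed -i -e 's/^mirrorlist/#mirrorlist/' \
           -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
           /etc/yum.repos.d/CentOS-Base.repo \
 && yum install -y httpd php php-mysql \
 && yum clean all

COPY ./oldproduct /var/www/html/
EXPOSE 80
# -X runs httpd as a single foreground process; adjust for your Apache version.
CMD ["httpd", "-X"]
```

From there, the CI/CD part is just `docker build` (or `podman build`) plus a push to your registry.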
Running the old EOL OS will simply leave you with more vulnerabilities with no fixes in the long term and is generally a really bad idea. How did your security team greenlight that idea? In some orgs out there the devs would be made to stop doing whatever they're doing (outside of critical prod fixes) and would be forced to migrate to something newer before proceeding.
>- New clients are still regularly signed on OLDPRODUCT.
I mean what's the WHY behind that? Why don't even new customers sign on to the new product? Why is the new product not in production? Is that the same reason?
[1] - https://linux-audit.com/increase-kernel-integrity-with-disab...
I know that problem from a thankfully long-gone internal Java application, and well... I went with running the old stuff in `debian/eol` Docker containers [1]. It turns out you actually can use Docker as a sort of extremely lightweight VM service.
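For illustration, the usage looks something like this (requires a Docker daemon; the tag and the entrypoint script are examples -- the debian/eol repository on Docker Hub carries tags such as lenny, squeeze, and wheezy, with sources.list already pointed at archive.debian.org):

```
# Poke around an archived Debian release interactively:
docker run --rm -it debian/eol:wheezy /bin/bash

# Or keep a legacy service running like a tiny VM
# (run-legacy-app.sh is a hypothetical entrypoint):
docker run -d --name legacy-app --restart unless-stopped \
    debian/eol:wheezy /usr/local/bin/run-legacy-app.sh
```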
If they're not providing security fixes for RHEL 6's packages, then why does this matter?
(+) or rather too many.
$ du -sh /lib/modules/$(uname -r)
294M /lib/modules/5.10.0-15-amd64
Just build the kernel and set the right options, this is for a Dell XPS13: https://github.com/jcalvinowens/misc/blob/main/kbuild/config...
It takes a few hours to whittle the config down for a particular piece of hardware, but I've never broken anything on Debian by running kernels built with CONFIG_MODULES=n.
* Edited for clarity
(you must reboot to re-enable module loading)
Useful on servers where specifying all modules to load is practical (netfilter modules are usually the only new modules needed unless the hardware changes). But on a workstation it will be very frustrating unless you never plug in any new USB devices, etc.
If you know which devices you are likely to plug in, you could just modprobe them all before disabling module loading.
Edit: I should mention, this will either result in a massive kernel that consumes a lot of memory, or in very little driver support and your machine will not tend to just work when you plug new devices in. Linux has a lot of drivers; there's a reason why it uses modules.
Ubuntu might not be able to distribute said "no module" kernel, but it might run.
https://github.com/c-blake/kslog has maybe a little more color on this topic, though I'm sure there are whole volumes written about it elsewhere. :)
EDIT: But maybe your "game over" point is just that it is kind of a pipe dream to hope to block all concealment tactics? That may be fair, but I think a lot of security folks cling to that dream. :)
~ # zgrep -F CONFIG_MODULES /proc/config.gz
CONFIG_MODULES_USE_ELF_RELA=y
# CONFIG_MODULES is not set
CONFIG_MODULES_TREE_LOOKUP=y

$ uname -sr
OpenBSD 7.1
$ du -sh /bsd
22.0M /bsd
>> vermagic=2.6.32-696.23.1.el6.x86_64 SMP mod_unload modversions
Do you know why they say "approximately match"? I thought it had to match exactly for the kernel to accept loading the module.
The greater the difference between the kernel version you compiled for and the kernel version you are trying to load on, the greater the chance that something you rely on has changed, the module loader can't resolve all the symbols, and the load fails.
So saying a kmod has to match the kernel version exactly is good practice, but the reality is not quite as strict.
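For example, you can see the vermagic string a module was built with via modinfo, and modprobe can even be told to override the check (the find-based lookup is just a convenient way to grab any installed module; whether one exists depends on your system):

```
# Print the vermagic of some installed module, if any
mod=$(find /lib/modules/$(uname -r) -name '*.ko*' 2>/dev/null | head -n 1)
[ -n "$mod" ] && modinfo -F vermagic "$mod"

# modprobe can ignore vermagic entirely -- at your own risk:
#   modprobe --force-vermagic nf_conntrack
```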
Red Hat maintains a list of whitelisted symbols (the kABI whitelist) that they try to keep stable across a major version of RHEL, so if your kmod relies only on those and nothing else, it should load on any kernel version within that release. But that's a Red Hat thing, not a Linux kernel thing.
(Tradeoff of runtime DIY symbol resolution / code grovelling being it's more work, and more likely to be crashy).
As a rootkit author you have considerably more flexibility than most module authors who are constrained by "sanity", maintainability, accepted practice and licensing terms.