Reminds me of the time I wrote a script that called 'hostname -x' on SunOS instead of Solaris and it changed the hostname to '-x' and broke X11. RHEL is the nostalgia Linux.
But seriously, has anyone ever empirically verified that the Debian Stable/RHEL model of shipping a bunch of really old packages and then layering years of patches over top actually generates more stable, more secure code?
My intuition, after a couple of decades of software dev, is that bugs will fester longer in the old version, and that the patches themselves will start growing bugs as the top of the tree diverges more and more from the shipped package over time.
Thus, if you install some random vendor's shitty software, you can rest assured that the versions of libcurl and the 50 other libraries it depends on are something the vendor themselves has tested on RHEL.
The same goes for hardware that you buy. When you buy e.g. Dell rack-mounted servers you can safely assume that the open source driver version maintained by the vendor shipped as part of the RHEL kernel is something that's seen extensive production use, unlike the latest upstream kernel, or whatever "in-between" Debian et al are shipping.
Am I recommending you use RHEL? No, it's not the right answer for everything, and I certainly have my share of RHEL scars, including a couple of times where a mundane bug in my program turned out to be a kernel bug (one in RHEL's own shitty patches, another "known" bug with their ancient kernel).
But this is the reason to use it, and why some major commercial vendors say "we support Linux: any distro you want, as long as it's on this list of RHEL versions". They just want to deal with those kernel/library versions, not any arbitrary combination out there in the wild.
Well, I think your definition of 'stable' is different from what RHEL/Debian customers mean. Stable isn't seen as "doesn't have bugs"; it's "works predictably". Which is a subtle but meaningful difference.
Debian has released a new stable version every 2 years for the last 14 years. RHEL/CentOS are the only ones on a 3-5 year cycle.
Someone needs to thaw Debian out.
The fact that there's a freeze, to allow troublesome issues to be shaken out of a few packages (and possibly to discover ones you hadn't already found in older ones) without much risk of other packages newly breaking, is a feature, not a bug.
Debian testing/unstable, backports and third-party repos exist if people really want the latest anyone's packaged, or the latest version of one specific thing on their otherwise stable system.
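For the "latest version of one specific thing" case, pulling a single newer package from backports onto an otherwise stable system looks roughly like this. This is a sketch for Debian 12 "bookworm" using the documented `apt -t` target-release mechanism; the kernel metapackage is just an illustrative choice, and the commands need root and network access:

```shell
# Enable the official backports suite for bookworm (one extra apt source).
echo 'deb http://deb.debian.org/debian bookworm-backports main' | \
  sudo tee /etc/apt/sources.list.d/backports.list

sudo apt update

# Pull just this one package (and its dependencies) from backports;
# everything else on the system stays at the stable release's versions.
sudo apt install -t bookworm-backports linux-image-amd64
```

Backports are rebuilt from testing against the stable release, so this keeps the "one new thing on a stable base" property the comment describes, rather than dragging the whole system forward.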
You may disagree with the philosophy, but every part of that behavior is working as intended.
E.g., let's say libfoo.so.1 implements DO_FOO; libfoo.so.2 implements DO_FOO2, but not DO_FOO. In this case, anything you ship that links against libfoo.so.1 and needs DO_FOO would have to be patched, recompiled, and shipped out to all your customers. For the distribution provider, this is not really a huge deal. But RHEL is merely the platform. The value-add is that third parties can write software, compile against its libraries, and know they're not going to break arbitrarily.
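The libfoo scenario can be made concrete in a few lines. This is a minimal sketch with hypothetical library and symbol names, using gcc's `-Wl,-soname` to stamp the version into the library, just as distro packages do:

```shell
# Hypothetical libfoo: an ABI break between soname versions.
mkdir -p /tmp/sodemo && cd /tmp/sodemo

cat > foo1.c <<'EOF'
int DO_FOO(void) { return 1; }
EOF
cat > app.c <<'EOF'
int DO_FOO(void);
int main(void) { return DO_FOO() == 1 ? 0 : 1; }
EOF

# Build version 1; its soname (libfoo.so.1) gets recorded as a
# dependency in anything linked against it.
gcc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1 foo1.c
ln -sf libfoo.so.1 libfoo.so
gcc app.c -L. -lfoo -o app
LD_LIBRARY_PATH=. ./app && echo "works against libfoo.so.1"

# "Upgrade": libfoo.so.2 drops DO_FOO and the old soname goes away.
# The app still asks the loader for libfoo.so.1, so it fails to start.
# A stable distro's promise is to never remove libfoo.so.1 mid-release.
cat > foo2.c <<'EOF'
int DO_FOO2(void) { return 2; }
EOF
gcc -shared -fPIC -Wl,-soname,libfoo.so.2 -o libfoo.so.2 foo2.c
rm libfoo.so.1
LD_LIBRARY_PATH=. ./app 2>/dev/null || echo "broken after upgrade"
```

The point of the sketch: the break happens at load time on the customer's machine, not at build time on the vendor's, which is exactly why vendors want one known set of sonames for the life of a release.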
Similarly, if you've ever written a kernel driver, you'll know that kernel function names and signatures can change from release to release. The same example above applies to kernel code as well: compiled binary drivers would have to be patched, recompiled, and shipped out. If you're writing a driver for a network card, would you prefer having to ship (non-bugfix) driver updates every few months, or every few years?
But the goal of the long-term-stable approach isn't security or stability per se: it's striking a tradeoff between operator work and risks to security and stability.

You could, of course, snapshot Fedora (or Debian testing, or Arch, or whatever) from 2013 and keep running it. Nobody is stopping you, and it'll still run on new machines. And then you have to do zero work to keep your system up-to-date, but you'll likely have tons of security and stability bugs.

On the other extreme, you could run Fedora rawhide (or Debian unstable, or current Arch, or whatever) and update nightly, which would mean you get security fixes as fast as possible (they're almost always developed on HEAD and backported to release branches), and you get performance and stability fixes that people haven't deemed worth backporting, but you also risk API-incompatible changes that break the actual applications you care about. You'll need to set up really good CI to make sure you have coverage of everything in your application, and it's not just a matter of automation: you'll need a well-staffed team to respond quickly every time that CI goes red, figure out what changed, and update your applications to match. (And, of course, you have the risk of security issues in new code that hasn't been subject to public scrutiny yet: the inverse problem of security issues in old code that's no longer subject to public scrutiny.)
The goal of a long-term stable distro is to be in the middle of those two, to give you something that changes rarely (stability in the sense of "no surprises," not "doesn't crash in prod") but often enough that you get major, identified security fixes and particularly safe performance (and stability-as-in-"no longer crashes in prod") fixes.
And yes, part of the goal of a long-term stable distro is that it provides you measurable security and stability over unmeasurable but potentially greater security and stability. They don't fix every CVE, but they do fix the flashy ones. You can look at it cynically and say, this is the distro for people who want to tell their boss "Yes, we patched Heartbleed and Shellshock" but don't inherently care about security. But on the other hand, flashy vulnerabilities are more likely to be exploited, so it's not a particularly bad tradeoff.
It is a GOOD thing to run old versions on purpose, by your own choice. It is NOT GOOD to have that choice made for you by force, and in the US legal system at least, many individual rights rest on this assumption, even with some inevitable negative outcomes. Please note that in many parts of the world, and in many kinds of organization, this trade-off is NOT made, and quite a few fundamental technical decisions are made along the lines of "do it, there is no choice".
And then you have bugs being fixed on master (sometimes silently) that the backport maintainers fail to backport.