[1]: https://blog.centos.org/2020/12/future-is-centos-stream/
For years CentOS promised long-term stability and "boringness"; a very long support cycle was its main selling point. It's why people installed CentOS, and now, suddenly, that's gone.
I don't think it matters that it's free in this case; they made a commitment/promise and suddenly went back on it. I wonder what all the sponsors[1] think of this; I'd be pretty darn peeved if I sponsored something like CentOS and it suddenly shifted direction like this.
Fedora is where new and shiny lands, with a release schedule of every 6 months, and ~1 year of support per version, with minimal backporting of bugfixes and frequent package updates. Lots of packages (including the kernel) update freely. That would not fly in CentOS.
CentOS Stream is the next minor version of RHEL. There is lots of backporting of patches, ABI stability, the works. And it is supported for as long as RHEL is supported (the standard tier anyways) because it is the next minor version of RHEL.
For a visual metaphor:
Fedora ---------------------------------------------> CentOS Stream --> RHEL
The development process works basically like this:
1) A new RHEL release is created from a rough snapshot of Fedora. It's not an exact copy of Fedora; a fair number of changes are made in the process.
2) Fedora keeps moving forwards quickly, RHEL stays put
3) CentOS Stream takes the most current RHEL release and starts layering updates on top of that
4) After a couple of months these updates from CentOS Stream are then pushed into RHEL as a new point release
5) Repeat steps 3 and 4
Debian Unstable -> Testing -> Stable -> Oldstable -> Oldstable with LTS
Fedora -> Centos Stream -> RHEL -> RHEL with LTS
Opensuse Tumbleweed -> Leap -> SLES -> SLES with LTS
Debian supports upgrades with major versions, and doesn't bump package major versions between minor versions.
SLE/Leap don't support upgrades between major versions, and bump package major versions between minor versions.

All of the people I deal with there are the same as before the acquisition, so things have not changed much for me personally, yet. But I am looking at building replacements for certain tools we depend on; the writing is on the wall.
Most businesses today are founded, then the founders are bought out/merged, getting cash in the process.
Then the new owners change things to profit from the business at the expense of its longevity, which breeds discontent among users and starts the slow downhill slide to the product's death. Then, as the product becomes less relevant and loses market share, cuts are made to preserve profitability and lower ongoing costs.
Once the business has extracted all the value from the product, they then drop the product entirely but retain the IP for it, preventing anyone else from resurrecting it, thus suppressing competition.
Or sell it to Micro Focus
I assure you, it is very easy to be unhappy with a company for screwing over its users, even if they think it might net them more revenue (bonus points for this being a questionable assumption)
When the first clones of RHEL appeared, they received C&D letters about the use of "Red Hat" in the name, so they complied and started replacing the branding before recompiling. Who would have expected that the only free-as-in-beer RHEL clone we'd be able to use would be Oracle Linux?
As a user of Linux as my main OS since 2005, and using it partially for years before that, I think another issue is that the quality of software releases is just much higher than it used to be. There used to be a tradeoff between "trustworthy" and "recent". These days it's more "possibility of a problem" vs "absolutely rock solid".
And of course the general move to the web. Apps that run in your browser no longer need a server you maintain yourself. When I started in my current job in 2004, there was a Debian stable server used for teaching. (It might still be in use for all I know.) This semester I used CoCalc. That's one less use for an ultrastable server.
CentOS is mostly a server distro, I bet 99% of installs are without GUI. So "possibility of a problem" is a no go for a server.
Edit: here's the HN post at the time:
Essentially this means that the idea of different distributions for LTS, stable, beta, alpha, bleeding edge, etc., goes away completely. You are never forced to update an old package, nor prevented from updating a new package. You get the best of all worlds.
And since the kernel is about as secure and performant as you can get, you essentially always have the latest kernel. Drivers get updated as necessary (defined by your policy, not the distribution's) in userspace, potentially as quickly as the moment they are released, with no downtime whatsoever.
For the kernel, I'm curious how your suggestion is better than Linux is already. Linux, today, is already a performant, secure kernel with an incredibly stable userspace-facing ABI.
For packages, the problem isn't so much being able to install old and new packages (although that's certainly useful); the problem is maintaining a stable version while still fixing bugs and security issues. It's no good just having a packaging system that lets me run a 5-year-old glibc with a fresh-from-git application server if that ancient glibc has multiple known exploits in it. The work in an LTS system is carefully backporting fixes to your chosen old version while holding its ABI stable.
You're right that LTS work doesn't go away...bugfixes will still need to be backported to old software versions. But that work is actually quite a bit easier when it is not so tightly coupled to kernel versions and repositories that are unique for each distribution and release version and architecture.
That complexity is a combinatorial explosion. Instead of having a different codebase for each (PackageVersion,KernelVersion,Distribution,Release,Architecture) combination, you would only need to maintain a codebase for each (PackageVersion, Architecture) combo...and maybe for packages which are trivially cross-compiled, even fewer.
A proper analogy is probably a city block in NYC. Lots of buildings of various eras, with the only commonality being the utilities in and out plus the ground they sit on.
Although periodically buildings are knocked down and rebuilt, this is an exceptional circumstance.
I actually worked at Red Hat a few years back and almost all work was released upstream-first. The one time I fixed something security-related, the fix was still made upstream first, but just embargoed until the fix was made and released for downstream versions. If I recall correctly, we pushed the upstream fix the day the downstream patch was public.
Now I run CentOS in production for a small web app. I get wanting a decade of support for your OS, but at least for cloud-based web apps that seems pretty unnecessary.
What am I missing here?
Debian packages are trivial to put into a container, and we tried that, but honestly it's not half as nice to work with.
With containers you have to do a ton of extra steps to get functionality and debugging on a level a default debian system provides you.
Additionally the tools to automate the installation and configuration of debian systems are way more mature compared to docker et al.
Containers aren't quite there yet.
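One example of that mature tooling: debian-installer preseed files let you script a fully unattended install. A minimal fragment for illustration (these are real d-i keys; the values are just example choices, not a recommendation):

```
# preseed.cfg - answers the installer's questions up front
d-i mirror/country string manual
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
d-i passwd/root-login boolean false
d-i pkgsel/include string openssh-server
```

Feed this to the installer at boot (or over HTTP) and you get a reproducible base system with no interactive prompts.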
Containerization is fantastic, don't get me wrong, but I've had more success with old-school approaches to package management, deployment, optimization, debugging, etc. running thin Debian servers. Just... prod ops is easier and more stable at the end of the day. I really don't see the need to containerize everything outside of cross-platform development tooling. I also really prefer having a semblance of an OS/bash terminal when it comes to ops!
Also: this is purely anecdotal. And, to get ahead of the folks yelling "you just don't understand Docker and K8s" - yes I do. I still think they're great, I'm just not fully sold on them for every use-case.
It's just not sexy enough for people to write hundreds of posts about how to set up your own package repo, understand unattended-upgrades, and do monitoring.
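For reference, the unattended-upgrades setup mentioned above really is just a couple of small apt config files on Debian (assuming the unattended-upgrades package is installed); the standard snippet is:

```
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

Which sources are eligible for automatic upgrade (e.g. security-only) is then tuned in /etc/apt/apt.conf.d/50unattended-upgrades.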
I think they've been there for 12-14 months. It's no longer a question of "if" but "when" a company decides on its container strategy (and it's more than just k8s - see https://blog.coinbase.com/container-technologies-at-coinbase...)
I work for a company that is 100% k8s. The base Linux of the containers is Debian 8, but honestly it doesn't matter that much - the OS is more kubectl and the orchestration around k8s (GKE, Prometheus, Sysdig, Grafana, ELK) - the "operating system" has moved up a stack.
We either were using someone else's prebuilt orchestration for something like ELK (insecure, needs constant auditing to be OK) or rolling it ourselves (very expensive in engineer time). None of it was ever working 100% and that was because we were jumping at software packages no one had really taken the time to fully understand. The mentality was "it's containerized!" which many on my team took to mean "we don't need to really grok it, it's in a container!" That burnt us, both on our TIG and ELK stacks. I left that job because it became putting out dumb fires that were not business-justifiable.
All-in-all I'm not saying what anyone is doing is wrong, I'm just saying that if you're going for an orchestrated environment like this you have to have a very mature team. You have to really care about learning these services well, and you have to be careful to not let your own architecture take your time away from solving real problems for the business.
The team I was on did not have that maturity outside of a couple of bitter/broken ops guys who didn't deserve what the team had done to them, while buzzword-driven leadership gutted their very proven and stable VMware infra into a total cluster-f K8s setup because "that's what we're supposed to do in 2018! That's what the new engineers want to work in!"
> the "operating system" has moved up a stack
Splitting hairs: the OS is still the same. The "stack" is a newly imposed abstraction on top of already established paradigms, where we are trying to abstract ourselves away from the OS. It's distributed compute more than it is the "OS moving up a stack".
Edit: Ha, I think you may have edited your comment to add the Coinbase article. That article is actually what I point people to when explaining that K8s isn't some golden bullet. I personally think Coinbase is a great compromise in leveraging containers without going off the rails (as they write about, e.g. the need for dedicated "compute" teams, etc.).
The rest of the world begs to differ. It's not a question of "is it ready or not", it's "am I going to use it or not".
We're way past that question.
In 10 years, the pendulum will have swung back and forth a bit and we'll know better what works well. I bet it's some form of lambda architecture.
Let me say that I am not pro or anti docker per se. I just happen to have started my career with a strong team pre-docker, and a lot of the docker enthusiasm doesn't really address what was lacking in the operations space pre-docker.
By contrast, docker build and docker run are super simple to get started with (at least at a high level, figuring out the right order of flags and options to mount volumes and expose ports can get a little cumbersome). And the docker registry is super simple to browse. It's so easy to get up and running with, and the model it promises of isolation and self-contained dependencies makes a lot of sense which I think is why it has taken off so much. Despite the fact, which I think is what you're pointing out, that there often end up being a lot of pitfalls lurking just around the corner.
Definitely pros/cons to both depending on your situation. I can imagine debian packages being more useful in large scale multi-developer environments.
Now, the tools that build them are a whole different kettle of fish.
Plus some stuff is really old school: ar is an archive format that nobody has used on its own since 1996; it's a sort of transparent bundler/archiver à la tar. I think it was originally used to bundle .so files into a bigger package while keeping the symbols inside visible. I don't really remember all the details; it's been a while since I looked into .deb packages.
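You can still poke at this directly: a .deb really is just an ar archive. A toy sketch (the demo file here is made up, but a real .deb starts with the same debian-binary member):

```shell
#!/bin/sh
set -eu
cd "$(mktemp -d)"

# Build a tiny ar archive the same way dpkg-deb starts a .deb:
# the first member is a debian-binary file containing the format version.
echo "2.0" > debian-binary
ar rc demo.deb debian-binary

# List the members. On a real .deb you'd also see the control.tar
# and data.tar members (compression suffix varies by package).
ar t demo.deb
```

`ar x some.deb` likewise extracts the tarballs, which is occasionally handy on systems without dpkg.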
I think the main problem is that Debian is not a commercial project and it shows sometimes. The tooling is kind of "hidden" (you have to poke around the distribution, mailing lists, etc. to figure things out) and the processes are kind of the same thing. The docs on the site are ok in some regards but they're far from complete and up-to-date.
Meanwhile the Dockerfile format is reasonably well documented and the tooling is also quite straightforward. You can see that a company made it its raison d'être for a while and wanted to make it easy to use.
Having a multi-megabyte docker container to run some random program just seems...wasteful.
You're usually still installing packages and working with a debian/ubuntu/whatever distribution, except they're in a container now.
Docker is an additional abstraction layer one should understand, not a replacement for an existing one.
If you are used to dealing with debian packaging and running stuff directly on your server, using docker can feel like taking a step forward, but also two backwards. A few things are better, but a bunch are worse.
docker
vagrant
VirtualBox (even some scripts to mimic EC2's spawning of machines with VBoxManage)
asdf (really, my fingers didn't slip on the keyboard)
npm
rvm
python's virtualenvs
For updates I just install the new version of the software, then perform a restart (new version starts, once it's ready, old version stops).
/usr/local/thing/versions/thing-v1.3.7
/usr/local/thing/versions/thing-v1.4.2
/usr/local/thing/current -> /usr/local/thing/versions/thing-v1.3.7

It was handled in a very hamfisted way. Permitting entities to rebase from C7 -> C8 and then pulling the plug has caused tremendous ill will.
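Going back to the versioned /usr/local/thing layout a few comments up: the switch can be made atomic by renaming a temporary symlink over "current", so readers never see a dangling link mid-upgrade. A sketch under /tmp (the thing-vX.Y.Z paths are the hypothetical ones from that comment):

```shell
#!/bin/sh
set -eu
base=$(mktemp -d)
mkdir -p "$base/versions/thing-v1.3.7" "$base/versions/thing-v1.4.2"

# "current" starts out pointing at the old version.
ln -s "$base/versions/thing-v1.3.7" "$base/current"

# Upgrade: create the new symlink under a temporary name, then rename
# it over "current". rename(2) on the same filesystem is atomic, so
# there is no window where "current" points nowhere.
ln -s "$base/versions/thing-v1.4.2" "$base/current.tmp"
mv -T "$base/current.tmp" "$base/current"

readlink "$base/current"
```

(mv -T is GNU coreutils; it treats the destination as a plain file so the rename replaces the symlink itself rather than descending into the directory it points at.)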
I fully believe this. Like I noted in another comment https://news.ycombinator.com/item?id=25358847, they have done something very similar in the past with the JBoss application server community edition.
It's causing a download extension in my browser to show a download dialog on clicking the link. I don't think it's a bug in the extension, but rather a required behavior because of the limitations of extensions. The extension has no sure way to know what my browser's treatment of the response will be after it inspects the body content to guess the type, so the safer thing to do in its case is to show the dialog.
I don't think Content-Type is required by HTTP, but it's pretty rare for servers not to include it. Might be good to add it, if only in the interest of the Robustness Principle[1].
I can't even imagine the amount of headache we would have gotten if they needed to upgrade every two years.
Debian, OTOH, has a much longer release cycle, and more of a reputation for moving like molasses, which for better or worse mirrors CentOS a bit more closely.
Fedora is a river, it never stops moving.
If you work in heavily regulated industries, systems migrations and software development can take on the order of years.
Having a mandatory OS upgrade mid-development/deployment is not desirable.
It's a terrible way to develop, but when changes need to be documented in excruciating detail and signed off on by legal... sometimes it's just the way things are.
Package managers are irrelevant. The relevant parts are the quality, the testing methodology, and the existing written policy and technical requirements for the software. Whether you ultimately deliver it as a deb, an rpm, or what have you amounts to little.
Debian Stable (with its quality, Debian policy, testing, and stability of package major versions) is closer to CentOS. Fedora would be, as Debian Unstable, too fast moving for CentOS usecases.
Wow, that has not been my experience with containers.
It's fast enough that some (obviously not most) developers don't even set up their local docker environment, and just use the CI/CD deployments to test their work. With the advantage being that if it looks good, a single deploy moves all of the production environment to it.
Even the basic ability to just spin up a container with all your stuff and start developing is an important advance (especially for cases like python2/3, java5/8, etc.).
Coupled with the remote containers feature of Visual Studio Code, working with containers is now a game changer.
Wish we would see more adoption of FreeBSD and NixOS in the future.
For now I like my Debian :-)
Looking for both volunteers and people who want to be paid for their efforts. Join us to keep Enterprise Linux free (as in beer and freedom) for the foreseeable future.
I would think that would be easier than migrating to a different package management system.
This change to CentOS means it won't have that stability, so some will need alternatives.
Fedora changes even faster than the new CentOS - a new version every six months! Each version is maintained for just over a year, and the maintenance includes new features.
At that scale, hiring your own quality support and running open systems is a drop in the bucket.
If you want the full IBM/Redhat experience, then you can even afford to hire 5+ layers of middle management and PMs between you and your engineers.
"support" doesn't mean reading man pages, it means diagnosing and fixing some intermittent bug in Intel's 10G NIC driver.
Doesn't look dead to me.