A cooldown is a good idea, though.
Certainly, having a regular/automated update schedule may take less clock time in total (due to preserved knowledge etc.), and incur less long-term risk, than deferring updates until a giant, risky multi-version multi-dependency bump months or years down the road.
But if you have limited engineering resources (especially for a bootstrapped or cost-conscious company), or if the risks of outages now are much greater than the risks of outages later (say, once you're 5 years in and have much broader knowledge on your engineering team), then the calculus may very well shift towards freezing now, upgrading later.
And in a world where supply chain attacks will get far more subtle than Shai-Hulud - especially with AI-generated payloads that can evolve as the worm spreads to avoid detection, and that may not require build-time scripting at all but instead defer their behavior until your code actually calls them - macro-level slowness isn't necessarily a bad thing.
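If you want that slowness to be mechanical rather than a matter of discipline, most update bots can enforce a cooldown for you. A rough sketch, assuming you use Renovate (the option has been renamed over the years, so check your version's docs - recent releases call it minimumReleaseAge, older ones stabilityDays):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "matchManagers": ["npm"],
      "minimumReleaseAge": "14 days"
    }
  ]
}
```

Two weeks is an arbitrary number; the point is just that a freshly published version never lands in your tree the same day it hits the registry.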
(It should go without saying that if you choose to freeze things, you should subscribe to security notification services that can tell you when a security update does release for a core server-side library, particularly for things like SQL injection vulnerabilities, and that your team needs the discipline to prioritize these alerts.)
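On npm, one low-effort complement to those services is a scheduled audit in CI against the frozen lockfile, failing only on severe advisories so it doesn't turn into noise people ignore. A minimal sketch (the workflow name, schedule, and threshold are all arbitrary choices):

```yaml
# .github/workflows/audit.yml - nightly advisory check against the frozen lockfile
name: dependency-audit
on:
  schedule:
    - cron: "0 6 * * *"   # once a day
  workflow_dispatch: {}
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci                        # install exactly what the lockfile says
      - run: npm audit --audit-level=high  # fail only on high/critical advisories
```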
Also, it keeps you in touch with your deps, so you can consider whether each one is even worth keeping. My favorite kind of update was removing the dep entirely (or at least starting a plan to remove it, because it kept interfering with regular updates).
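If you want a quick first pass at finding removal candidates, something like depcheck is decent - treat its output as leads to investigate, not a verdict (some-suspect-dep below is just a placeholder):

```sh
# list dependencies that no source file appears to import
npx depcheck

# before deleting, confirm nothing in the tree still resolves to it
npm ls some-suspect-dep
```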
This is only true if you install dependencies that break backwards compatibility.
Personally, I avoid this as much as possible.
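In npm terms that mostly comes down to choosing deps that take semver seriously, and being deliberate about the ranges you give them. A sketch with made-up package names - caret ranges for the well-behaved ones, an exact pin for the one that breaks things in minor releases:

```json
{
  "dependencies": {
    "well-behaved-lib": "^4.2.0",
    "breaks-every-minor": "2.7.3"
  }
}
```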
Indeed, there are people doing that, and communities where the consensus is that such an approach makes sense, or at least isn't frowned upon. (Hi, Gophers.)
The problem is that code bases are continuously evolving. A safe decision now might not be a safe decision in the future. It's very easy to accidentally introduce a new code path that does make you vulnerable.
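A simplified TypeScript illustration of how that plays out. Say you pinned an old lodash years ago, back when one of its prototype-pollution advisories was out (the merge family, fixed somewhere in the 4.17.x line if I remember right), and audited it as "we never hit that path":

```ts
import express from "express";
import _ from "lodash"; // pinned long ago to a version with a known issue in _.merge

const app = express();
app.use(express.json());

const defaultPrefs = { theme: "light", notifications: true };

// Day one: merge was only ever used on trusted, internally-built objects, so the
// pin looked safe. Two years later someone reuses the same helper on a request
// body, and the prototype-pollution path (__proto__ keys) is suddenly reachable.
app.post("/prefs", (req, res) => {
  const prefs = _.merge({}, defaultPrefs, req.body);
  res.json(prefs);
});

app.listen(3000);
```

Nothing about the dependency changed; the code around it did.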
This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt it. You shouldn't have to take new features (and their associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with maintaining parallel maintenance branches for each major release line, and with backporting fixes to older releases.
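For what it's worth, the mechanics are the easy part; with git it's roughly this (branch names, tags, and the commit hash are placeholders). The hard part is committing to doing it for years:

```sh
# keep a maintenance branch per supported major
git checkout -b release/2.x v2.9.0

# pull only the security fix over from main, without the new features
git cherry-pick abc1234

# cut a patch release on the old line
git tag v2.9.1
git push origin release/2.x v2.9.1
```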
For open source, well, these are volunteer projects done on my own time; you are always welcome to fork a given version and backport any fixes that land on main/master.
For commercial libs, our users are not willing to pay extra for this service, so we don't provide it. They would rather stay on an old version and update the entire code base at given intervals. Even when we do release patch versions, there is surprisingly little uptake.
- it usually contains improvements to security
- except when it quietly introduces security defects which are discovered months later, often in a major rev bump
- but every once in a while it degrades security spectacularly and immediately, published as a minor rev
We had so many distinct packages on my last project that I had to massively upgrade a tool a coworker had started for tracking the dependency tree, just so people would stop being afraid of the release process.
I could not think of any way to make lock files not be the absolute worst thing about our entire dev and release process, so each of the handful of deployables got its own lockfile, used only to cut hotfix releases without the dep tree changing out from under us. Artifactory helps only a little here.
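For anyone in a similar spot, the one thing those per-deployable lockfiles really buy you is that a hotfix build reproduces the tree instead of re-resolving it. In npm terms it looks roughly like this (a sketch with placeholder versions, not our actual release tooling):

```sh
# hotfix branch off the released tag; touch application code only,
# leave package-lock.json exactly as it shipped
git checkout -b hotfix/1.4.1 v1.4.0

# npm ci installs exactly what the committed lockfile says and fails loudly
# if package.json and package-lock.json have drifted apart
npm ci
npm test

npm version patch   # 1.4.0 -> 1.4.1 without touching the dependency tree
```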
Also, some software is just always buggy, and every version is a mixed bag of new features, bugs, and regressions. That can be due to the complexity of the problem the software is trying to solve, or because it's simply not written well.