Regardless, can we talk about the conduct in this GitHub thread? I know every community is different but is it common to have memes and jokes posted this quickly and often in a GitHub issue? It makes it really hard to follow and discourages genuinely useful discussion of workarounds or progress.
It's certainly acceptable to blow off steam, but there are times and places for that, and the official issue thread is not one of them. It looked more like a Reddit post than the official NPM repo. I have nothing against Reddit; I'm just pointing out that it's a more reasonable venue for this sort of commentary (as is HN).
The thread looked like folks celebrating, as if the power went out and they get to go home from school for the day, rather than "oh crap, this is my job and I have to fix this issue, or my livelihood is at risk because I decided to have no backup plan in place for a situation where NPM is down."
</old-man-moaning>
> Posting memes/jokes in an issue wastes the time of people who actually need to be able to read through the issue.
> Stop it. Use emoji reacts if you feel the need.
In two minutes I have 24 dislikes and 13 likes. I think the introduction of emoji reactions was a bad idea; it gamifies the issues system.
It depends on the community. I've seen many where the reactions are genuine and actually serve a purpose. From what I've seen, it's almost always popular JS/Node projects that attract the kind of behaviour seen in this thread.
The latest example is VS Code's santa hat issue[1]. That thread is mostly empty now, but I remember a huge meme thread going on in that issue, or on Reddit/HN, while it was happening.
About all you can do is change the mindset of the community making the comments, which is hard as they're numerous.
1. GitHub presents an interface that encourages Reddit style behaviour.
2. NPM outages are essentially a meme in themselves at this point.
Both of them together create a perfect storm.
However, regarding #1... the interface allows people to post images in their comments on issues. This has a valid, useful reason - for showing screenshots of bugs, for example.
The problem is not the interface. The problem is the people using the interface.
This almost entirely explains the JavaScript ecosystem and the seemingly ever-increasing number, and frequently changing popularity, of the various frontend frameworks. It seems a hype-driven ecosystem, so it's not surprising it's also full of people posting memes and such.
I wonder why this kind of behavior seems to exist only in the JavaScript community. Could it be that a vast majority of JavaScript developers are actually in high school? Are there any good sources of stats on this?
Actually, those animated gifs might have been better than all those "me too" posts, since a text-only post informing people about the situation would have stood out much better.
I am honestly surprised by the number of very upset people in this thread, and by the trashing of millennials. Yeah, the butt of every workplace generational joke: most millennials have kids going to college now.
If you think millennials like memes, wait until you work with some zoomers. A lot of people are screaming "get off my lawn" right now.
25-40?
If something goes bad at work, someone always (orally) makes a joke to defuse the situation. The same thing is happening here.
Does it though? Sounds like an opinion.
Given that the comments in question add absolutely no value to the issue and instead insert a bunch of visual noise and fog then I'd argue it's not an opinion.
Sure, there are real costs when the registry has problems:
- N developers can't work for X hours
- or the company can't release new versions due to CI dependency on the registry.
- or the registry removes a package you were using
- or the existing package contents changes to something malicious
BUT you pay this price very occasionally and if you're a small shop, the cost is often negligible.
On the other hand, maintaining your own mirror has very real costs, even if they can be small: one-time setup, hardware, sometimes a license or hosted-service fee, security upgrades. When there's a sponsor maintaining the central repository, keeping very good uptime, and offering it for free, the marginal utility of a local mirror is quite small.
That’s been my experience anyway with local servers. We’ve had a lot of problems.
Edit: vendoring dependencies, on the other hand, is very reliable. But it doesn’t work well with DVCS.
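If committing node_modules itself is what clutters the DVCS history, one lighter-weight variant is vendoring the packed tarballs instead. A sketch (the package name and version are illustrative, and `--pack-destination` requires a reasonably recent npm):

```shell
# Vendor a dependency as a tarball instead of committing node_modules.
mkdir -p vendor
npm pack lodash@4.17.21 --pack-destination ./vendor   # writes vendor/lodash-4.17.21.tgz
git add vendor/

# Later (or on another machine), install straight from the local tarball.
npm install ./vendor/lodash-4.17.21.tgz
```

One tarball per dependency keeps an upgrade diff to a single binary file rather than a 10k-line node_modules change.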
https://docs.microsoft.com/en-us/azure/devops/artifacts/get-...
- https://open-registry.dev/#get-started
WARNING: Research who runs the mirror before putting your trust in it.
How to turn on: `npm config set registry https://npm.open-registry.dev`

How to turn off: `npm config delete registry`

The result was lightning-fast fetches and no rate limiting.
We needed the two-tiered system because this was Kube and occasionally we would have to rebuild/restart nodes and we didn't want to completely lose the cache when that happened.
You can easily extend this system to handle any package artifacts used in your build process: .deb's, .rpm's, etc.
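The comment doesn't say which proxy they ran, but the npm side of such a pull-through cache can be sketched with Verdaccio (one common choice); mounting its storage directory on a persistent volume is what provides the tier that survives node rebuilds:

```shell
# Start Verdaccio as a caching proxy; by default it uplinks to
# registry.npmjs.org and caches every tarball it serves.
npx verdaccio --listen 4873 &

# Point npm at the proxy; cache misses fall through to the real registry.
npm config set registry http://localhost:4873
```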
> I am the engineering manager for the DDoS protection team and this morning at 11:06 UTC we tweaked a rule that affected one of our signals. The signal relates to the HTTP referer header, and we have a piece of code that looks at invalid referer headers. In this case we tweaked it to include not just "obvious garbage" but "anything that does not conform to the HTTP specification"... i.e. is the referer a URI? If not then it contributes to knowledge about bad traffic.
> So... why did this impact npmjs.org? It turns out that a lot of NPM traffic sends the referer as "install" which is invalid according to the HTTP specification. As NPM is also a heavily trafficked site this resulted in the DDoS systems picking this up and treating the traffic as a HTTP flood and determining that a rate-limit should be applied.
> When we noticed that NPM was seeing an increase in HTTP 429s (as seen on Twitter) we contacted NPM and started an internal investigation. As soon as we identified the root cause we reverted the change, which was at 13:00 UTC.
> We'll note that NPM and 1 other site use the referer for purposes outside the HTTP spec and we'll update our systems to ensure that this does not happen again. Additionally we'll improve our monitoring around changes of this nature so that we can discover impact sooner and roll back automatically.
[0] https://github.com/npm/cli/issues/836#issuecomment-587019096
Unfortunately, I am unable to resist the urge to add to the noise by complaining about people complaining.
Whether you use expensive 'turnkey' solutions like Artifactory or keep things simple, there's just a surprising number of ways for a local mirror to go wrong, especially if you depend on it for any kind of third-party dependency compliance control.
Some repository mirrors will also become very large, which means that if you're e.g. running them in a cloud provider, the bill can add up. That's not really a problem on local hardware, but the up-front cost of hardware can be substantial, and a lot of startups have little to no in-house IT capability (e.g. the org I work with right now has reached hundreds of employees without a single system administrator on staff, so as the devops person I end up doing the care and feeding of our recently purchased local hardware as well).
In general I think this is an important and often overlooked issue in modern tech businesses - it is amazing how many technology-centric firms like software startups get to appreciable size relying entirely on outside SaaS/PaaS providers with no real in-house IT operation. This reduces up-front and staffing costs but has a way of coming back to bite you once you hit a certain size. A conversation I've been in before, in reasonably large software outfits, goes: "we want actual real office phones now, but telephony-as-a-service is really expensive and the on-prem products use scary words like VLAN and QoS in their setup documentation". As someone with an IT rather than software background, it's a little baffling to me how this happens; I feel like a combo sysadmin/network engineer would be an early hire. But here I am working for a company instead of running one...
In Travis there's some (not too obvious) caching mechanisms that in many cases avoid this and speed builds up a ton.
I wonder if we would benefit from a review of how widely the cache is actually configured, and from educating people further on its use. I'm sure other CI systems have similar capabilities.
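As a sketch of what that caching buys you (the cache-step syntax differs per CI system, so only the npm side is shown here):

```shell
# The directory a CI cache step should persist between builds (usually ~/.npm).
npm config get cache

# During the build, prefer cached tarballs and hit the registry only on a miss.
npm ci --prefer-offline
```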
1. Eliminate redownloading packages on every CI build.
2. Reduce the amount of gigantic IO from unpacking the tens of thousands of files sitting in node_modules.
3. Better security: checked-in code can be audited more easily than code downloaded fresh on every CI build.
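A common way to get #1 is to key the CI cache on a hash of the lockfile, so dependencies are only re-fetched when they actually change. A minimal sketch (the helper name is made up):

```shell
# Derive a deterministic cache key from the lockfile contents; a CI cache
# step can restore node_modules whenever the key matches a previous build.
cache_key() {
  echo "node-modules-$(sha256sum "$1" | cut -c1-16)"
}
```

`cache_key package-lock.json` then yields a stable key that changes exactly when the lockfile does.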
Yarn's PnP system is promising for the zero-install paradigm, but it doesn't seem quite ready yet (too many packages don't declare their dependencies correctly).
However, early adopters of npm in the frontend world (back in Browserify and Require.js days) didn't like the practice (notably, because many parts of the dependencies contained node-only code, tests and scripts that were needed for building dependencies, etc.), and started putting node_modules in .gitignore. At the same time, Node people started to use other means to manage dependencies for reproducible builds: namely, private npm registries, dockerfiles, etc.
Over time both frontend and Node communities recognized the need for lockfiles, which we eventually got with Yarn and later versions of npm.
https://blog.isquaredsoftware.com/2017/07/practical-redux-pa...
We had a task every month for one developer to go manually upgrade one or two dependencies and commit the changes after testing (java libraries tend to upgrade much slower than Node).
In the early days of node (circa 2011-2013) we used to do the following:
1. Run `npm install --ignore-scripts` first.
2. Check the node_modules folder into source control.
3. Run `npm install` again, this time without the flag.
4. Put all the extra files generated by install scripts into .gitignore.
This way the third-party code (at least, the JS-part of that code) was in the repository, and every developer / server got the version of binaries for their architecture.
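The steps above as a runnable sketch (the npm flags still exist today, though the workflow itself is dated; the .gitignore path is an example):

```shell
# 1. Fetch the JS sources only; --ignore-scripts skips install/build scripts.
npm install --ignore-scripts

# 2. Check the third-party JS into source control.
git add node_modules && git commit -m "vendor node_modules"

# 3. Re-run with scripts enabled so this machine builds its own binaries.
npm install

# 4. Keep the script-generated artifacts out of the repo.
echo "node_modules/**/build/" >> .gitignore
```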
It wasn't bullet-proof, though, since:
1. The scripts could do different things anyway.
2. More importantly, one could upload a new version of a library to npm under the same version number.
These days, lockfiles and stricter npm publishing rules largely eliminated both issues, and updating dependencies doesn't produce 10k-line diffs in git history anymore.
This idea of redownloading all packages all the time from external sources (without even having a fallback plan) seems completely brain-dead to me. Didn't people learn from leftpad-gate?
The thing with using web technology for distribution is that it’s easily accessible and, crucially, that it’s cachable in-line.
Is that like a local cache?
A single failure in any of those centralized systems, which we don't even pay for, and builds fail.
It 100% is, unfortunately. I don't recall this happening in recent history, but it has been the case that 3rd-party services have broken CI/CD pipelines and production pushes (e.g. pip broke a few weeks ago, and their own pipeline for deploying changes was blocked by the bug).
One day this is going to happen for real, and it will be because the npm org decided to charge for the API requests made by `npm ci`.
> Investigating - We are currently investigating 403 / 429 errors received by some users.
> Monitoring - Our content delivery partner has informed us they've implemented a fix. We are monitoring. Feb 17, 13:07 UTC
Edit: And in the OP GH thread, people are also mentioning it's back up.
[a] https://blog.sonatype.com/using-nexus-3-as-your-repository-p...
Includes a built-in package manager for resource fetching, thus no need for NPM.
It really doesn't make a lot of sense to re-download tons of packages for every minor commit where you run your CI.