Another advantage is that because they're so minimal and self-contained, they're often "completed", because they achieved what they set out to do. So there's no need to continually patch them for security updates, or at least you need to do it less often, and it's less likely that you'll be dealing with breaking changes.
The UNIX philosophy is also built on the idea of small programs, just like micro-libraries: do one thing and do it well, and compose those things to make larger things.
I would argue the problem is how dependencies in general are added to projects, which the blog author pointed out with left-pad. Copy-paste works, but I would argue the best way is to fork the libraries and add the forks as submodules to your project. Then when you want to pull a new version of the library, you can update the fork and review the changes. It's an explicit approach to managing dependencies that can prevent a lot of pitfalls: malicious actors, breaking changes leading to bugs, etc.
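For the record, a sketch of how that flow can look with git (the URL and paths are placeholders, and I'm assuming an upstream "main" branch):

    git submodule add https://github.com/you/left-pad-fork vendor/left-pad
    # later, to pull a new version: update the fork, then review before merging
    git -C vendor/left-pad fetch origin
    git -C vendor/left-pad diff HEAD..origin/main
    git -C vendor/left-pad merge origin/main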
In JS and NPM they are a plague, because they promise to substitute for competence in basic programming theory, competence in JS itself, the gaps and bad APIs inside JS, and the kind of de-facto standards other communities have, like the oldest functions in libc.
There are a lot of ways to pad a number in JS, and a decent dev would keep their own utility library, or hell, a function to copy-paste for it. But no. npm users are taught to fire and forget, and to update everything, with no concept of vendoring (which would have made incidents like left-pad, faker and colors less maddening, and vendoring is even built into npm, and it's very good!). For years they copy-pasted into the wrong window, really: they should copy-paste blocks of code and not npm commands. And God help you if you type out your npm commands, because bad actors have bought into the trend and made millions of packages with a hundred different scams waiting for fat fingers.
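The kind of utility I mean, as a minimal sketch (it just leans on String.prototype.padStart, built into JS since ES2017):

    // Keep this in your own utils file instead of installing a package.
    function leftPad(value, width, fill = '0') {
      return String(value).padStart(width, fill);
    }

    leftPad(7, 3);       // "007"
    leftPad(42, 5, ' '); // "   42"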
Once you understand that backend JS optimizes for reducing cost whatever the price, becoming Smalltalk for the browser and for PHP devs, you would expect some kind of standard to emerge: a single way to do routine stuff. Instead, in JS-world you get TypeScript, and in the future maybe WASM. JS is just doomed. Or rather, we are doomed if JS isn't, to be honest.
I'm also not arguing against a large popular project with a lot of contributors if it's made up of a lot of small, modular, self-contained code that's composed together and customizable. All the smaller tools will probably work seamlessly together. I think UNIX still operates under this sort of model (the BSDs).
There's a lot of code duplication and bad code out there, and way too much software that you can't really modify or customize for your use case, because that becomes an afterthought. Even if you did learn a larger codebase, if it's not made up of smaller modular parts, then whatever you modify has a significantly higher chance of breaking once the library gets updated: you changed internal code, and the library authors aren't going to worry about breaking changes for someone maintaining a fork that touches their internals.
I have distressing news about my experience using Linux in the '90s
Regardless of how supposedly good or small the library is, the frequency at which you need to check for updates is the same. It doesn't have anything to do with the perceived or original quality of the code. Every 3rd-party library has at least a dependency on its platform, and platforms are big; they have vulnerabilities and introduce breaking changes. Then there's the question of trust and the consistency of your delivery process. You won't adapt your routines to the specifics of every tiny piece of 3rd-party code, so you probably check for updates regularly and for everything at once. Then their size is no longer an advantage.
> Copy-paste works, but I would argue the best way is to fork the libraries and add submodules to your project. Then if you want to pull a new version of the library, you can update the fork and review the changes.
This sounds "theoretical" and is not going to work at scale. You cannot seriously expect application-level developers to understand the low-level details of every dependency they want to use. For a meaningful code review of merges they must be domain experts; otherwise the effectiveness of such an approach will be very low: they will inevitably have to trust the authors and just merge without going into details.
When's the last time ls, cat, date, tar, etc. needed to be updated on your Linux system? Probably almost never. And composing them together always works. This set of Linux tools, call it sbase, ubase, the plan9 tools, etc., is one version of a metapackage. How often does a very large package need to be updated for bug fixes, security patches, or new versions?
Submodules can work too, but do you really need those extra lines in your build scripts, extra files and directories, and the import lines just for a five-line function? Copy-pasting is much simpler, with maybe a comment referring to the original source.
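For example, a hypothetical five-line helper vendored with a provenance comment:

    // Copied from <source URL> rather than pulled in as a dependency.
    // (clamp is just a stand-in example)
    function clamp(value, min, max) {
      return Math.min(Math.max(value, min), max);
    }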
Note: there may be some legal reasons for keeping "micro-libraries" separate, or for not using them at all, but IANAL, as they say.
If you want the same functionality, build it according to the conventions in the codebase and strip out everything else that isn't required for the exact use case (since it's not a library anymore)
The Unix philosophy is also built on willful neglect of systems thinking. The complexity of a system isn't in the complexity of its parts but in the complexity of the interactions between its parts.
Putting ten micro-libraries together, even if each is simple, doesn't mean you have a simple program, in fact it doesn't even mean you have a working program, because that depends entirely on how your libraries play together. When you implement the content of micro-libraries yourself you have to be at the very least conscious not just of what, but how your code works, and that's a good first defense against putting parts together that don't fit.
They have small programs, but they are not separate projects. For example, all the basic Linux utilities are developed and distributed as part of the GNU coreutils package.
It's the same as having a modular library with multiple functions in it that you can choose from. In fact, the problem is that functions like isNumber shouldn't even be libraries; they should be in the language's standard library itself.
But you need the functionality anyway, so there are two kinds of dependency: on your own code, or on someone else's code. Either way you can't avoid a dependency, and it comes at a cost.
If you don't know how to code the functionality, or it would take too much time, a library is a reasonable outcome. But needing leftPad or isNumber as an external dependency is so far in the other direction that it's practically a sign of incompetence.
Could you, for laughs, explain which cases these are for, why they are needed, and why they did it this way?
1) num-num === 0
2) num.trim() !== ''
3) Number.isFinite(+num)
4) isFinite(+num)
5) return false;
6) Why this specific order of testing? Why prefer Number.isFinite over isFinite?
https://www.npmjs.com/package/is-number
    module.exports = function(num) {
      if (typeof num === 'number') {
        return num - num === 0;
      }
      if (typeof num === 'string' && num.trim() !== '') {
        return Number.isFinite ? Number.isFinite(+num) : isFinite(+num);
      }
      return false;
    };
I would have just.... isNumber = num => isFinite(num+''.trim());
Why is that not precisely the same? (It isn't.) How about...
    function isNumber(num) {
      switch (typeof num) {
        case "number": return !isNaN(num);
        case "string": return isFinite(num) && !!num.trim();
      }
    }
Is there a difference? IMHO NPM should have a discussion page for this. There are probably interesting answers to all of those for people looking to copy and paste.
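Not the author's stated rationale, but my reading: num - num === 0 filters out NaN and ±Infinity in a single comparison; the num.trim() !== '' guard rejects empty and whitespace-only strings (which isFinite would happily coerce to 0); and the Number.isFinite ? ... : isFinite(+num) ternary is just feature detection for pre-ES2015 engines, and since the argument is already coerced with +, both branches behave identically. The order is presumably just the cheap common case (plain numbers) first. As for the differences, a few inputs where the three versions diverge (easy to check in a REPL):

    const isNumber = require('is-number');             // the package above
    const oneLiner = num => isFinite(num + ''.trim()); // NB: ''.trim() binds first, so this is num + ''
    const switchVer = num => {                         // the switch version above
      switch (typeof num) {
        case 'number': return !isNaN(num);
        case 'string': return isFinite(num) && !!num.trim();
      }
    };

    isNumber(Infinity);   // false (Infinity - Infinity is NaN, not 0)
    switchVer(Infinity);  // true  (!isNaN(Infinity) is true)

    isNumber('  ');       // false (blocked by the num.trim() !== '' guard)
    oneLiner('  ');       // true  (isFinite('  ') coerces whitespace to 0)

    isNumber(true);       // false
    switchVer(true);      // undefined (no matching case): falsy, but not false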
This year I started learning FORTH, and it's very much this philosophy. To build a building, you don't start with a three-story slab of marble. You start with hundreds of perfect little bricks, and fit them together.
If you come from a technical ecosystem outside the Unix paradigm, it can be hard to grasp.
Yeah, it's all concatenative programming: FORTH, unix pipes, function composition as monoids, effect composition as Kleisli composition and monads, etc.
It makes it super useful for code readability (once you're familiar with the paradigm), and debugging, since you can split up and decompose any parts of your program to inspect and test those in isolation.
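A tiny JS sketch of the same idea (the helper names are made up):

    // Concatenative style: small pieces, composed left to right.
    const pipe = (...fns) => x => fns.reduce((acc, fn) => fn(acc), x);

    const trim  = s => s.trim();
    const words = s => s.split(/\s+/);
    const count = xs => xs.length;

    const wordCount = pipe(trim, words, count);
    wordCount('  small pieces loosely joined  '); // 4

    // Each stage can be pulled out and tested in isolation:
    words('a b c'); // ['a', 'b', 'c']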
These are tiny programs.
I mean, sort has put on some weight over the years, sure. But if it were packaged up for npm people would call it a micro-library and tell you to just copy it into your own code.
If you understand what is going on, paste it into your tree.
Well I think that is the point: they're not self-contained. You are adding mystery stuff, and who knows how deep the chain of dependencies goes. See the left-pad fiasco that broke so much stuff, because the chain of transitive dependencies ran deep and wide.
NPM is a dumpster fire in this regard. I try to avoid it - is there a flag you can set to say "no downstream dependencies" or something when you add a dependency? At least that way you can be sure things really are self-contained.
yarn add <path/to/your/forked/micro-library.git>
pnpm add <path/to/your/forked/micro-library.git>
Forking the code and using that is arguably nicer though, IMO; it makes it easier to pull in new updates, and to track changes and bug fixes. I've tried both and find this approach nicer overall.
Micro-dependencies are a goddamn nuisance, especially with all the transitive micro-dependencies that come along, often with different versions, alternative implementations, etc.
I haven't done anything with this myself (just brainstormed a bit with chatgpt) but I wonder if the solution is https://docs.npmjs.com/cli/v10/commands/npm-ci
Basically, enforce that all libraries have lock files and when you install a dependency use the exact versions it shipped with.
Edit: Can someone clarify why this doesn't work? Wouldn't it make installing node packages work the same way as it does in python, ruby, and other languages?
You could say that if all the popular web frameworks in use today were rewritten to import and use hundreds of thousands of pico-libraries, their codebases would be, as you say, composed of many highly modular, self-contained pieces that are easy to understand.
/s
To reformulate the statement made in the intro of this post: "maybe it’s not a great idea to outsource _any critical_ functionality to random people on the internet."
It has long been a standard, best practice in software engineering to ensure dependencies are stored in and made available from first-party sources. For example, this could mean maintaining an internal registry mirror that permanently stores any dependencies that are fetched. It could also be done by vendoring dependencies. The main point is to take proactive steps to ensure your dependencies will always be there when you need them, and to not blindly trust a third-party to always be there to give your dependencies to you.
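With npm, for example, pointing installs at such a mirror is a one-line config (the hostname here is hypothetical):

    # .npmrc: resolve everything through an internal caching mirror
    registry=https://npm-mirror.internal.example.com/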
Well everything is critical in the sense that a syntax error could break many builds and CI systems.
This is what lock files are for. If used properly, and the registry is available, there are no massive issues. This is how things are supposed to work; all the tooling is made this way.
In short, I think the lessons from the leftpad debacle are (1) people don’t use existing versioning tooling, (2) there is a surprising amount of vendors involved if you look at dep trees for completely normal functionality and (3) the JS ecosystem is particularly fragmented with poor API discipline and non-existent stdlib.
EDIT: Just read up on it again and I misremembered. The author removed leftpad from NPM due to a dispute with the company regarding an unrelated package. That’s more of a mismanaged registry situation. You can’t mutate and remove published code without breaking things. Thus NPM wasn’t a good steward of their registry. If there’s a need to unpublish or mutate anything, there needs to be leeway and a path to migrate.
If you're particularly unlucky, the unused functionality pulls in transitive dependencies of its own - and you end up with libraries in your dependency tree that your code is literally not using at all.
If you're even more unlucky, those "dead code" libraries will install their own event handlers or timers during load, or will be picked up by some framework autodiscovery mechanism, and will actually execute some code at runtime, just not any code that provides anything useful to the project. I think an apt name for this would be "undead code". (The examples I have seen were from Java frameworks like Spring and from webapps with too many autowired request filters, so I do hope it's not such an issue in JS yet.)
Indeed. Several toy projects I've done were blown up in size by four orders of magnitude because of Numpy.
I only want multi-dimensional arrays that support reshaping and basic element-wise arithmetic, maybe matrix multiplication; I'm not even that concerned about performance.
But I have to pay for countless numerical algorithms I've never even heard of provided by decades-old C and/or FORTRAN projects, plus even more higher-math concepts implemented in Python, Numpy's extensive (and fragmented - there's even compiled code for testing that's outside of any test folders) test suite that I'll never run myself, a bunch of backwards-compatibility hacks completely irrelevant to my use case, a python-to-fortran interface wrapper generator, a vendored copy of distutils even in the wheel, over 3MiB of .so files for random number generators, a bunch of C header files...
[Edit: ... and if I distribute an application, my users have to pay for all of that, too. They won't use those pieces either; and the likelihood that they can install my application into a venv that already includes NumPy is pretty low.]
I know it's fashionable to complain about dependency hell, but modularity really is a good thing. By my estimates, the total bandwidth used daily to download copies of NumPy from PyPI is on par with that used to stream the Baby Shark video from YouTube - assuming it's always viewed in 1080p. (Sources: yt-dlp info for file size; History for the Wikipedia article on most popular YouTube videos; pypistats.org for package download counts; the wheel I downloaded.)
I just refactored a bunch of python computer vision code that used detectron2 and yolo (both of which indirectly use OpenCV and PyTorch and lots of other stuff), and in the process of cleaning up unused code, I threw out the old imports of the yolo modules that we weren't using any more.
The yololess refactored code, which really didn't have any changes that should measurably affect the speed, ran a mortifying 10% slower, and I could not for the life of me figure out why!
Benchmarking and comparing each version showed that the yololess version was spending a huge amount of time with multiple threads fighting over locks, which the yoloful code wasn't doing.
But I hadn't changed anything relating to threads or locks in the refactoring -- I had just rearranged a few of the deck chairs on the Titanic and removed the unused yolo import, which seemed like a perfectly safe innocuous thing to do.
Finally after questioning all of my implicit assumptions and running some really fundamental sanity checks and reality tests, I discovered that the 10% slow-down in detectron2 was caused by NOT importing the yolo module that we were not actually using.
So I went over the yolo code I was originally importing line by line, and finally ran across a helpfully commented top-level call to fix an obscure performance problem:
https://github.com/ultralytics/yolov5/blob/master/utils/gene...
cv2.setNumThreads(0) # prevent OpenCV from multithreading (incompatible with PyTorch DataLoader)
Even though we weren't actually using yolo, just importing it, executing that one line of code fixed a terrible multithreading performance problem with OpenCV and PyTorch DataLoader fighting behind the scenes over locks, even if you never called yolo itself. So I copied that magical incantation into my own detectron2 initialization function (not as top-level code that got executed on import, of course), wrote some triumphantly snarky comments to explain why I was doing that, and the performance problems went away!
The regression wasn't yolo's or detectron2's fault per se, just an obscure invisible interaction of other modules they were both using, but yolo shouldn't have been doing anything globally systemic like that immediately when you import it without actually initializing it.
But then I would have never discovered a simple way to speed up detectron2 by 10%!
So if you're using detectron2 without also importing yolo, make sure you set the number of cv2 threads to zero or you'll be wasting a lot of money.
- Documentation: they are usually well documented, at least a lot better than your average internal piece of code.
- Portability: you learn it once and can use it in many projects, a lot easier than potentially copy/pasting a bunch of files from project to project (I used to do that and ugh what a nightmare it became!).
- Semi-standard: everyone in the team is on the same page about how something works. This works on top of the previous two TBF, but is distinct as well e.g. if you use Axios, 50% of front-end devs will already know how to use it (edit: removed express since it's arguably not micro though).
- Plugins: now with a single "source" other parties or yourself can also write plugins that will work well together. You don't need to do it all yourself.
- Bugs! When there are bugs, now you have two distinct "entities" that have strong motivation to fix the bugs: you+your company, and the dev/company supporting the project. Linus's eyeballs and all (yes, this has a negative side, but those are also covered in the cons in the article already!).
- Bugs 2: when you happen upon a bug, a 3rd party might've already found and fixed it, or offered an alternative solution! In fact I just did that today [1]
That said, I do have some projects where I explicitly recommend to copy/paste the code straight into your project, e.g. https://www.npmjs.com/package/nocolor (you can still install it though).
[1] https://github.com/umami-software/node/issues/1#issuecomment...
Copy-paste the code into your internal library and maintain it yourself. Don't add a dependency on { "assert": "2.1.0" }. It probably doesn't do what you actually want, anyway.
I think the more interesting point is that most projects don't know what they actually need and the code is disposable. In that scenario micro-libraries make some amount of sense. Just import random code and see how far you can get.
[1] I lied, I don't even run npm publish, I made my own tool for easy publishing so I just run `happy "Fixed X bug" --patch`
I would prefer them to be built straight into the languages.
I fail to comprehend how a single-function-library called "isNumber" even needs updating, much less "fairly frequently".
The debate around third-party code vs. self-developed is eternal. IMHO if you think you can do better than existing solutions for your use-case, then self-developed is the obvious choice. If you don't, then use third-party. This of course says a lot about those who need to rely on trivial libraries.
If someone uses isNumber as a fundamental building block and a surrogate for Elm or TypeScript (a transpiler intermediate that would, I hope, treat numbers more soundly), this poor soul, whom I deeply pity, will encounter a lot of strange edge cases (like the one stated in the article: is NaN a number or not?), and if they fear the burden of forking the library they will try to inflict this burden upstream, enabling feature or config bloat.
I insinuate that installing isNumber is, like most of these basic microlibs, a symptom of incompetence in usage of the language. A worn JS dev would try isNaN(parseInt(num+'')) and sometimes succeed.
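"Sometimes" being the operative word; a quick sketch of where that shortcut diverges:

    const looksNumeric = num => !isNaN(parseInt(num + ''));

    looksNumeric('42');     // true  - fine
    looksNumeric('12px');   // true  - parseInt stops at the first non-digit
    looksNumeric('');       // false - fine
    looksNumeric(Infinity); // false - parseInt('Infinity') is NaN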
Nothing is ever certain when you program in javascript.
Never underestimate the complexity and footgunny nature of JS' type system.
surely it can't be beyond the wit of programming kind to have a standard lib, or even layers of standard lib for Node?
What is the argument for not having a standard lib, apart from download speed?
When you put something in the standard library, it's harder to take it out, meaning that you're committing development resources to support the implementation. Furthermore things change: protocols and formats rise and fall in popularity and programming style evolves as the language changes (e.g. callbacks vs. promises in JS). Therefore the stdlib becomes where libraries go to die, and you'll always have a set of third party libraries that are "pseudo-standard", like NumPy in Python.
Having a minimal stdlib lets you "free-market" the decision, letting the community effects take care of what is considered standard in the ecosystem, and lets you optimize its minimal surface, like what happened with C.
I sometimes hanker for a return to Fortran IV where every routine was separately compiled and the linker only put into the object code those that were referred to by something else.
This can lead to the occasional rude surprise when finally reaching code I've been working on for awhile, but haven't yet connected to the rest of the project. But it means there's no need for tree shaking, because nothing gets in until it gets used. One of my favorite things about the language.
I moved to option 3: in all my apps I include a function library that I've built over the years, so I don't start from scratch every time. I deeply hate ("hate speech" example here) dependencies on libraries from all over the Internet, for security reasons, but I copy-paste code into my library when needed, after I read, understand and check the code that I copy. The biggest advantage is that some of this code is better than what I could invent from scratch on a busy day, and I save the time of doing it right. The disadvantage is there is no way to reward these authors who contribute to humankind.
PS. My function library has functions mostly written by me, over 80%, but it includes code written by others. In my case, every time I need a function I check my existing library first, then analyze whether to write or copy.
This doesn't apply to micro-libraries, but it looks like that cost/benefit list is intended to cover libraries in general.
I guess the opinion I'll share here is that I don't hear too many people arguing that the way embedded developers manage C libraries is at the forefront of how we should be handling and distributing code.
There's a good reason for it -- code quality and performance is important. Understanding what your code actually does and how it does it, is important.
And as a result, AAA game software is able to push the boundaries of consumer compute performance. Micro-library-built software is generally just bloated crap that barely works. But it is fast to churn out, so there's that.
Well, that's a proper use of SemVer, not sure why you put it against the library's author. I've personally been burned enough times by libraries that for some reason think that literally being unable to compile them is somehow a backwards-compatible change, so it's refreshing to see that some people actually understand that.
Normally, packages are listed in my composer.json and stored in vendor/. For those packages, I created a separate folder called vendor_private/ which is part of my Git tree, put copies of these weird little packages in it, and set up my composer.json to consider that folder a repository.
Works like a charm. My big important packages are still upstream. I can customize the little ones as needed to fit better, or have better code, and not worry about them going unmaintained. It’s also way quicker than copying the files individually out of the package and into the right places (along with updating Namespaces, configuration, etc.) Once in a while, I’ll go back and see if anything worthwhile has changed upstream - and so far, it never has.
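Presumably via a path repository; a minimal sketch of what that composer.json setup can look like (the package name is made up):

    {
        "repositories": [
            { "type": "path", "url": "vendor_private/*" }
        ],
        "require": {
            "acme/weird-little-package": "*"
        }
    }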
I'm also an advocate, against the crowd, of qualified imports, as they help with refactoring (renames are propagated, especially in monorepos), readability and reviews (functions are qualified, so you know where they're coming from) and the overall coding experience: a qualified module name followed by a dot gives good autocompletion, imports look neat in larger projects, etc. A codebase written like this resembles an extended standard library. It also helps with solving problems by encouraging first-principles thinking and bottom-up coding that produces an auditable codebase with shallow external dependencies.
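In JS terms that style looks something like this (module and helper names are made up):

    // Qualified (namespace) imports: every call site says where a name lives.
    import * as str from './lib/str.js';
    import * as num from './lib/num.js';

    const padded = str.leftPad('42', 5); // easy to grep, easy to rename
    const ok     = num.isNumber(padded);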
Using SNS as an example, when it's neither micro nor a library but a service (and a huge abstraction over native push notifications, whereas most micro-libraries provide simple utilities that aren't very abstract). Saying that complex libraries are harder to audit and hence a security risk, which should be a point in favor of micro-libraries that are small enough to audit in minutes. Saying libraries might have large footprints, which is surely another reason to go for micro-libraries over all-you-could-possibly-need libraries. Saying transitive dependencies are bad, which yet again points towards an advantage of micro-libraries, since they are less likely to have many dependencies... I don't know.
"Would future updates be useful? No. The library is so simple that any change to the logic would be breaking, and it is already clear that there are no bugs."
Maybe what you want is a library ecosystem where things can be marked "this will never change". Something crazy happens and you actually need to update "is-number"? Rename it.
Of course, you can simulate that with a single large omnibus dependency that everyone can trust that pulls all these silly micro-libraries in verbatim.
Indeed you can, but it depends what isNumber does. This is more like what it should do IMO:
    function isNumber(foo) {
      // foo == foo filters out NaN, the only value not equal to itself
      return ((typeof foo === "number") && (foo == foo)) ||
             ((typeof foo === "object") && (foo instanceof Number));
    }
And that is I think the value of micro libs, at least in JS, you don't want to think about all the edge cases when you only want to check if something is a Number.
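Although even the two versions here disagree on boxed numbers, which is exactly the kind of edge case in question:

    isNumber(new Number(5));             // true with the version above:
                                         // typeof is 'object', instanceof Number
    require('is-number')(new Number(5)); // false: objects match neither typeof branch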
But the broader point is, you can't outsource understanding to a package. There will be places in your code where NaN is a perfectly valid number, or Infinity. And other places where you absolutely need to be sure neither of the above make their way in.
Pretending that a package can capture the universal essence of "number-ness", and that this will broadly apply across the entire JS ecosystem (see reported benefits like "different libraries can all rely on is-number instead of rewriting duplicated helper functions!"), is naive.
I wrote more about this in a post linked in a top level comment. The is-promise library is another great example.
* Personal pet theory is that the package author would have been embarrassed to publish a 1-line package, so included "numeric strings are numbers" as a fig leaf to justify the package's existence. They should have instead created two new packages, is-actual-number and is-numeric-string, so the implementation of is-number could be nice and clean:
module.exports = function(n) { return require('is-actual-number')(n) || require('is-numeric-string')(n); }
I can feel the power of webscale coursing through me. In any case this is a bad example because TypeScript exists.
I only tried to highlight some edge cases that I personally don't like to spend energy on, trying to get it right, when writing code. Btw, isNumber is a dynamic call in the example and unrelated to TypeScript. TypeScript doesn't exist at runtime.
(At this point Nodejs is the defacto tooling ecosystem for even JS destined to run in a browser. You can't separate the two.)
If you don't need anything else, and having the linker not include unused code isn't good enough, then just vendor the single function.
There could still be some special case but that will need a lot of explaining to justify and will be such an exception that it is silly to talk about. There are legitimate one time freak exceptions to every principle. It means nothing.
However, if I can inline a small function, I will, so in that sense I agree.
This is profoundly true. JavaScript written for the frontend has different "physics" to backend code.
It's not only code size that is significant. It's the fact that when you ship code over the wire to a client, you don't know what browser or even JS engine version will be interpreting it. Platform incompatibility has been a huge driver of issues in the JS/NPM ecosystem and has caused JS's culture to develop the way it has.
I wrote more about this, link in a top level comment.
And if the LLM ain't good enough to write leftpad, how can I trust it to write anything at all?
Furthermore, that's not even the main contention I was highlighting. Without a proper definition, the advice of "just copy/paste these" is dangerous. Someone will draw their line at something too large, copy/paste that in and inherit bugs/vulnerabilities they never fix. That's a big problem.
Perhaps we should start there.
Obviously you want basic, stable and well documented functionality in your programming language.
But JavaScript does simply not have it. So how do you solve this dilemma?
1) the everything is an import way: use NPM and create a dependency hell from hell (requires Satan) made by Lucifer (same as Satan but different) using lava with fire (requires node v <= 9.42.0815) and heat (deprecated) requiring brimstone (only node v > 10.23) with a cyclic dependency on the Devil (incompatible with Satan).
2) the Golang way: copy paste ALL the things, only for your co-worker to copy paste all the things again, only for your co-worker to copy paste all the things again, only for your...
Way 1 wastes your time when it breaks (sooner rather than later) but is necessary for non-trivial functionality. Way 2 works only for trivial packages, so choose your poison.
JavaScript (apart from not being a good programming language in general) is sorely missing a std lib.
One could argue that having a bad std lib is even worse (PHP, anyone?) but it is really hard to decide.
Sadly JavaScript is just unfit for the purpose it is being used for.
Applications should never have trivial, tiny libraries as moving-target external dependencies.
If you must use a small library, bring it into the program.
The advantage: - everybody can contribute an npm package
The disadvantage: - everybody can contribute an npm package
Passive voice. WHO should never use micro-libraries?
How is this the fault of the library? You chose the wrong one!
"This often cancels out the primary benefit of libraries. No, you don’t have to write the code, but you do have to adapt your problem to fit the library"
You evaluated the library, found it unsuitable, and yet it is somehow their fault.
Why on earth would you project your own failures on to someone else's code? You do you!
> I have talked a lot about the costs of libraries, and I do hope people are more cautious about them. But there’s one factor I left out from my previous discussion. I think there’s one more reason why people use libraries: fear.
> Programmers are afraid of causing bugs. Afraid of making mistakes. Afraid of missing edge cases. Afraid that they won’t be able to understand how things work. In their fear they fall back on libraries. “Thank goodness someone else has solved the problem; surely I never would have been able to.”
I think this is true, but why does the JS ecosystem seem to have "more fear" than for example the Python ecosystem?
I wrote about this a while ago. I think that actually JS does (or did) cause more fear in its developers than other programming languages. I described it as paranoia, a more insidious uncertainty.
Quoting myself[1]:
> There are probably many contributing factors that have shaped NPM into what it is today. However, I assert that the underlying reason for the bizarre profusion of tiny, absurd-seeming one-liner packages on NPM is paranoia, caused by a unique combination of factors.
> Three factors have caused a widespread cultural paranoia among JavaScript developers. This has been inculcated over years. These factors are: JavaScript's weak dynamic type system; the diversity of runtimes JavaScript targets; and the physics of deploying software on the web.
...
> Over the years there has been rapid evolution in both frontend frameworks and backend JavaScript, high turnover in bundlers and best-practises. This has metastasized into a culture of uncertainty, an air of paranoia, and an extreme profusion of small packages. Reinventing the wheel can sometimes be good - but would you really bother doing it if you had to learn all the arcane bullshit of browser evolution, IE8 compatibility, implementation bugs, etc. ad infinitum?
> And it's not just that you don't understand how things work now, or how they used to work - but that they'll change in the future!
[1] https://listed.to/@crabmusket/14061/javascript-s-ecosystem-i...
Certainly the language is quirky, but it really doesn't change that much. Frameworks have come and gone but JavaScript itself is still the same. is-number would have looked much the same 15 years ago, if anyone was crazy enough to actually distribute it.
No, it's much more mundane: "Thank goodness someone else has solved the problem, because I sure as hell don't want to solve it myself; I don't have the time or the brain power/will/motivation for that". What is a number in JS? I don't even want to start thinking about it, just give me an isNumber() function. Why is it not in the standard library in the first place?
Perhaps it's because so many JS developers - quite rightfully - suffer from impostor syndrome?
It's the language with the largest proportion of people who didn't set out to be programmers but somehow got mission-crept into becoming one.