- It has a bus/lottery factor of 1. The vast majority of all the changes were done by Araq and I have very little faith that the language would survive without him. This is even more pronounced with Zig (mentioned in comments here).
- It has had some very embarrassing bugs after the 1.0 milestone. Most of them were specific to Windows (e.g. [0]), which casts a lot of doubt on its cross-platform promise. Multiple times in the last year, when debugging a Nim program, it turned out that the problem was in the language or the standard library.
Now, these might not be reason enough not to use Nim, since it's a lovely language when it works, but a pro/con list should be honest.
And with respect to the ecosystem at large, there are dozens of contributors and a very healthy package repository:
- nimble (the package manager), written and maintained by dom96
- arraymancer (tensor + array + NN library), written and maintained by mratsim
- weave, an up-and-coming thread runtime by mratsim that is better than just about any existing thread runtime for any language
- NimPy, for seamless Python integration (... which produces one DLL that works with every Python version; can your C++ do that?), by yglukov
...and many more.
And most libraries you'd need already have a Nim wrapper (and one is extremely easy to generate if not), though the pure-Nim ecosystem is growing every day; have a look at https://nimble.directory/
What were Rust's early years like? Was it a one-developer effort at first?
I'd imagine this is not a big deal in the early days, when the benevolent dictator is as much the language as the project itself. Not all technology adoption happens on the same timeline: Matz's Ruby took a long time to become super popular, while Rich Hickey's Clojure seemed to be a powerhouse even as it found quick adoption before stalling.
When Rust started as a hobby project it was a one man effort, but it was also a project with ~1 user. It grew developers before actually growing users, and for a while, it had more developers than users.
In the very beginning, it was a one-man project, but after some time it was picked up by Mozilla Research as an official research project, with several developers working on it (brson and pcwalton, in addition to the language creator), and they also started a new research browser (in partnership with Samsung) using this experimental language. That's when people started to hear about Rust (and it was still very far from 1.0 at that point).
Oh. That really surprised me, as I had assumed the bugbears I have as an occasional Nim user were there because it was developed for/on Windows primarily. Actually bothering to take a look shows that isn't the case at all.
Bugbears such as the linking story on Linux¹, the argument handling², and the style and verbosity of the compiler output, [a bunch of others]. Nothing show-stopping, to be fair, but a bunch of things that just seem out of place (and that always seem to require explanation when co-workers see a Nim tool).
1. https://github.com/nim-lang/rfcs/issues/58
2. https://nim-lang.org/docs/parseopt.html, although alleviated by argparse to some extent.
1. Arraymancer https://github.com/mratsim/Arraymancer
2. The new version of the garbage collector understands move semantics and uses them to optimize its reference counting, so, unlike Rust, where you have to deal with ownership yourself, Nim will handle it for you at the expense of a reasonable amount of memory. https://youtu.be/yA32Wxl59wo?t=855 Watch the whole video for more context if you care.
I think it's a really nice language: too many pragmas, but still really nice.
I really think that the folks behind Nim need to focus on getting some killer apps in the ecosystem and in marketing them. That's all Nim needs. Just some useful tools wrapped in a nice package.
Anyone out there who has used both and has more observations?
For numerical computing, however, Julia wins hands down, purely on the strength of the community and libraries.
I have another use case: first-class (easier-than-others?) cross-compilation support.
I had to build a one-off tool that was a glorified "curl wrapper" with validations, for Windows, on a Mac. Wrote a simple Nim script, cross-compiled it for Windows, and it's been fine and dandy for a year now :)
I'm sure other languages support this (golang?), but Google's SEO suggested nim-lang.org
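For the curious, a Mac-to-Windows cross-compile like the one described above can be sketched in two commands (an assumption-laden sketch: it presumes the MinGW-w64 toolchain installed via Homebrew, and a hypothetical `tool.nim` source file):

```shell
# Install the MinGW-w64 cross toolchain on macOS (assumes Homebrew)
brew install mingw-w64

# Cross-compile for Windows: -d:mingw tells the Nim compiler
# to use the MinGW cross toolchain instead of the host C compiler
nim c -d:mingw --cpu:amd64 tool.nim
```

This yields a `tool.exe` you can ship to a Windows box with no runtime dependencies beyond the usual system DLLs.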
Carp also has an open pull request for using Zig as a cross-compiler, FWIW.
Also, I'm with Araq on this one -- in my experience, every time I reached for a cyclic cross-file type declaration, there was a much simpler acyclic solution I found later.
I had forgotten how irritating creating header-like declaration files was until I tried Nim.
I expect the developers to know this, since it's accessible only through the `$ nim secret` command.
This feature is meant to make linking easier, but it shouldn't have leaked into the variability of a single module's sources.
Now you can learn a bit about Nim and have fun doing it :)
A couple of links:
https://docs.google.com/document/d/1eYRTd6vzk7OrLpr2zlwnUk7m...
In short, while Go has deliberately shunned all modern developments in PL design, Nim has embraced them. Also Nim has real macros, while Go does not.
It's clear to me that though immature, Nim is a much better and more expressive language than Go.
Therefore, I think the two languages appeal to different sets of people.
[0] https://commandcenter.blogspot.com/2012/06/less-is-exponenti...
Also it's faster than Go on most benchmarks.
This is no small thing. The expressiveness of a language makes the difference between happy productivity and tedious typing and swearing.
Nim is in the same ballpark as Python here, while many other languages require a lot of boilerplate.
After adding a million requested features it will end up bloated like all the others...
[EDIT] recognized your nick from IRC a while back, my "good idea" is still to add support for 80-bit floats ;)
Using "no memory leaks" and "reference counting" in the same sentence is #fakenews. Reference counting leaks cycles unless accompanied with a tracing GC (at which point reference counting makes little sense).
Python proves otherwise. Reference counting gives you deterministic memory use and finalization except when a cycle is involved. The tracing GC helps for those cases (and libraries) that do introduce cycles.
If each one of your objects is in a cycle, then -- yes, reference counting makes no sense. If only 1% of your objects are in a cycle, it makes 99% sense.
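Python itself makes the split observable: an acyclic object is finalized deterministically the instant its refcount hits zero, while cyclic garbage lingers until the tracing collector runs. A minimal sketch using only the standard `gc` and `weakref` modules (CPython-specific behavior; the `Node` class is just an illustrative stand-in):

```python
import gc
import weakref

class Node:
    pass

# Acyclic object: reclaimed deterministically when the last reference dies.
a = Node()
ref_a = weakref.ref(a)
del a
assert ref_a() is None  # refcount hit zero; finalized immediately

# Cyclic objects: their refcounts never reach zero, so they linger...
b, c = Node(), Node()
b.peer, c.peer = c, b   # b <-> c reference cycle
ref_b = weakref.ref(b)
del b, c
assert ref_b() is not None  # still alive: the cycle keeps each refcount at 1

# ...until the tracing cycle collector runs.
gc.collect()
assert ref_b() is None  # cycle detected and reclaimed
```

So the hybrid scheme is exactly the 99%/1% trade-off described above: deterministic cleanup for the common acyclic case, with the tracing collector as a backstop for cycles.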
So anyone can dress like Moses, come down off the mountain with tablets, and we'll debate the scriptures without considering the provenance? Good to know. California ballot initiatives often work that way.