We (Beamdog) are using Nim in production for Neverwinter Nights: Enhanced Edition, for the server-side parts of running the multiplayer infra.
Nim is uniquely good at providing an immense amount of value for very little effort. It gets _out of my way_ and makes it very easy to write a lot of code that mostly works really well, without giving me any serious traps or pits to fall into: no memory leaks, no spurious crashes, no side-effect oopses, anything like that. Its C/C++ interop has been a huge enabler for feature growth as well, since we can partly link in game code and it works fine. For example, our platform integrates seamlessly with native/openssl/dtls for game console connectivity. And it all works, and does so with good performance. It is now a set of quite a few moving components (a message bus, various network terminators, TURN relays, state management, logging and metrics, a simple JSON API consumed both by game clients and the web (https://nwn.beamdog.net), ...).
We're still lagging behind and are on 1.0.8, but that is totally fine. It's tested and works, and there's no real incentive to move to 1.2 or 1.4 - yet!
Usage for our game has expanded to provide a few open source supporting utilities (https://github.com/Beamdog/nwsync) and libraries (https://github.com/niv/neverwinter.nim/) too. The good part about those is that they are cross-platform as well, and we can provide one-click binaries for users.
OTOH, there have been a few rough edges and some issues along the way. Some platform snafus come to mind, but those were early days - 0.17, etc. Some strange async bugs were found, though they were fixed quickly.
Good and bad considered, at least for me, Nim has been a real joy to work with. If I had the chance to message my three-years-younger self, I'd just say "yeah, keep going with that plan", as it turned out to save us a lot of development time. I suspect the features we've shipped wouldn't have been possible in the timeframe we had, if it had all been written in, say, C++.
May I ask, did you consider Go and decide against it for any reason? Your requirements of quick development, cross-platform support, and interoperability are all guaranteed features of Go, which should have given better peace of mind for a production application.
I'm honestly surprised Nim is not the secret weapon of many start-ups. Nim is much more "open architecture" instead of pushing some "single, canned turn-key fits most users" solutions.
Having a juggernaut like Google marketing/pushing/subsidizing something might provide false as well as true peaces of mind. :-) { Such peace is a multi-dimensional concept. :-) }
The PoC was incredibly quick to manifest, and iterating on it had quickly proven itself as a good way forward.
Peace of mind was a judgment call. Despite being a rather sizeable project now, what we had back then was already very stable and reliable (even under heavy benchmark load), and there weren't any great unknowns souring the decision.
Even with the rather old-school approach of using echo/logging.nim, turnaround tends to be quick. I have not felt the need to attach a debugger to the process, mostly because our architecture is very pluggable. Almost all events/interactions go over a message bus and can be hooked/handled individually.
My experience is the complete opposite: I find Nim to be very simple and Zig to be not simple. What's the problem with Zig? I find the documentation chaotic, which I believe is largely due to the rapid pace of changes (including changes that break earlier code).
Before I started really digging into Nim, it seemed like the language was always changing a lot (feature churn). However, most of those changes have been compiler support for different GCs and other backend languages, which don't generally break existing code. I've tried Nim code from 4 years ago, and it's almost the exact same syntax; sometimes stdlib names changed. I think syntax change is the part that gets people the most in terms of daily "complexity".
It actually reminds me of the feel of Python 2, before Python 3 started adding new syntax and language complexity every release - well, Python 2 but with a more solid language theory (e.g. everything can be a statement, etc.). The trickiest parts of day-to-day Nim are: var vs no-var parameters, object vs ref object, and some iterator annoyances.
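A minimal sketch of the first two of those distinctions (type and proc names are mine, not from any particular codebase):

```nim
type
  Point = object       # value type: assignment copies
    x, y: int
  PointRef = ref Point # reference type: assignment shares

proc moveX(p: var Point, dx: int) =
  # `var` parameter: mutates the caller's variable in place;
  # without `var`, p would be an immutable copy and `p.x += dx`
  # would not compile
  p.x += dx

var a = Point(x: 1, y: 2)
let b = a            # b is an independent copy
moveX(a, 10)
doAssert a.x == 11
doAssert b.x == 1    # the copy is untouched

let r = PointRef(x: 1, y: 2)
let r2 = r           # r2 aliases the same heap object
r2.x = 99
doAssert r.x == 99   # the change is visible through both references
```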
I'm planning to write up a more detailed post outlining the architecture of this. But suffice it to say, Stardust is written 100% in Nim. Both the servers and the client running in your browser are written in Nim: the client uses Nim's JS backend and the server Nim's C backend. The two share code to ensure the game simulation is the same across both. Communication happens over WebSockets.
It's been a lot of fun working on this, and I cannot imagine another language being as flexible as Nim in making something like this possible. I already have an Android client up and running too; it is built from the same code, and I plan to release it soon.
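A rough illustration of how one Nim module can target both backends with shared logic (file and proc names here are hypothetical, not from Stardust):

```nim
# sim.nim -- shared simulation code, compiled by both backends
# (illustrative sketch only)

proc step*(pos, vel: float): float =
  ## deterministic update that must agree between client and server
  pos + vel

when defined(js):
  # built with `nim js sim.nim` -- runs in the browser
  echo "browser step: ", step(1.0, 0.5)
else:
  # built with `nim c sim.nim` -- runs natively on the server
  echo "native step: ", step(1.0, 0.5)
```

Because `step` is plain Nim with no backend-specific code, both builds execute the exact same simulation logic.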
* JS, obviously
* Rust
* C/C++
* Anything else that can compile to WASM
How does that work, and is it an alternative to dart/flutter?
Commercial web application in Java. Java GC is concurrent, on other cpu cores than the cores processing customer requests. Modern GCs such as Red Hat's Shenandoah or Oracle's ZGC have 1 ms or lower pause times on heap sizes of terabytes (TB). Java 15 increased max heap size from 4 TB to 16 TB of memory.
Now the argument. A thread running on a cpu core which processes an incoming request has to earn enough to pay the economic dollar cost of the other threads that do GC on other cpu cores. But that thread processing the client request spends ZERO cpu cycles on memory management. No reference counting. Malloc is extremely fast as long as memory is available. (GC on other cores ensures this.) During or even after that thread has serviced a client request, GC will (later) "dispose" of the garbage that was allocated when servicing the client request.
Obviously, just from Java continuously occupying the top 2 or 3 spots for popularity over the last 15 years -- Java's approach must be doing something right.
That said, I find Nim very interesting, and this is not a rant about Nim. I am skeptical of an alternative to real GC until it is proven to work, in a heavily multi threaded environment. And there is that economic argument of servicing client requests with no cpu cycles spent on memory management -- until AFTER the client request was fully serviced.
I'm not sure if this argument holds. Java's high memory usage is frequently cited as a downside of Java. GUI applications written in Java have a reputation of being memory-hungry, and I know plenty of people struggling with memory usage of server application (e.g. ElasticSearch). You will also find C/C++ on the same popularity lists…
That said, I do agree that a tracing GC is a better solution (for most programs) than reference counting these days. The improvements in pause times by the Java GCs are really impressive, and the throughput is great. One example would be esbuild: the author created a prototype in both Rust and Go and found that the Go version was faster, allegedly because Rust ended up spending a lot of time deallocating [source: https://news.ycombinator.com/item?id=22336284].
I would rather optimize for performance and energy use at the cost of higher memory use.
See recent HN:
https://news.ycombinator.com/item?id=24642134
https://greenlab.di.uminho.pt/wp-content/uploads/2017/10/sle...
Memory is a one time capital cost and gets cheaper over time.
Reference-counting strategies are much easier to optimize; so if you have fewer resources available to throw at your compiler it's the way to go.
Adding value types and deterministic deallocation doesn't require endless GC research and was already available in languages like Mesa/Cedar and Oberon, features that Nim also has anyway.
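For example, Nim's destructor hooks give deterministic, scope-based cleanup for value types. A minimal sketch (the `=destroy`/`=copy` hooks as written assume --gc:arc or --gc:orc):

```nim
type
  Buffer = object
    data: pointer
    len: int

proc `=destroy`(b: var Buffer) =
  # runs deterministically when a Buffer leaves scope
  if b.data != nil:
    dealloc(b.data)

proc `=copy`(dest: var Buffer, src: Buffer) {.error.}
  ## forbid implicit copies that would lead to a double-free;
  ## Buffers can only be moved

proc newBuffer(len: int): Buffer =
  Buffer(data: alloc(len), len: len)

proc use() =
  let b = newBuffer(1024)
  # ... fill b.data ...
  # b.data is freed right here at end of scope -- no deferred GC pass

use()
```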
> As far as we know, ARC works with the complete standard library except for the current implementation of async...
That's not a great endorsement...
> If your code uses cyclic data structures, or if you’re not sure if your code produces cycles, you need to use --gc:orc and not --gc:arc.
Seems like this is a big onus to put on the user -- it's tough to prove a negative like this.
* Nim's current async implementation creates a lot of cycles in the graph, so ARC can't collect them.
* ORC is then developed as ARC + cycle collector to solve this issue, and it has been a success.
* This 1.4 release introduces ORC to everyone so that we can get mass testing for this new GC and eventually move towards ORC as the default GC.
TL;DR: ORC works with everything† and will be the new default GC in the future. Your old Nim code will continue to work, and will just get faster‡.
† We are not sure that it's bug-free yet, which is why it's not the default for this release.
‡ Most of the time ORC speeds things up, but there are edge cases where it might slow things down. You're encouraged to test your code with --gc:orc against our default GC and report performance regressions.
> ARC was first shipped with Nim 1.2... [ORC is] our main new feature for this release
Seems like they should phrase it like "use ORC unless you know you don't have cycles" rather than "use ORC if you're not sure you have cycles", but that's a reasonable responsibility to take on if you're choosing to use an alternative garbage collector.
A significant portion of the problems with cycles in ARC comes from parent references.
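For instance, a child node holding a reference back to its parent forms exactly the kind of cycle that ARC alone leaks (a minimal sketch):

```nim
type
  Node = ref object
    parent: Node         # back-reference: this closes the cycle
    children: seq[Node]

proc makeTree(): Node =
  result = Node()
  let child = Node(parent: result)  # child -> parent
  result.children.add child         # parent -> child

proc demo() =
  let root = makeTree()
  doAssert root.children[0].parent == root
  # under --gc:arc, root and child keep each other's refcounts
  # above zero forever; under --gc:orc, the cycle collector
  # detects and reclaims the pair after demo returns

demo()
```

A common mitigation under plain ARC is to annotate the back-reference with the `{.cursor.}` pragma so it no longer participates in reference counting, which breaks the cycle at the source.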
Since Nim uses the C compiler to generate executables, you should be able to use `--passC:-fsanitize=memory --passL:-fsanitize=memory` to enable msan. For maximum effectiveness the flags `-d:useMalloc --gc:orc` should also be used.
`-d:useMalloc` tells Nim to allocate memory using libc's malloc instead of our TLSF implementation. This should provide adequate compatibility for use with external inspection tools. We use `--gc:orc` because it is one of the only GCs that supports -d:useMalloc (the other being ARC).
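Putting those flags together, a full invocation might look like this (`myprog.nim` is a placeholder file name):

```shell
# build with MemorySanitizer instrumentation, libc malloc, and the ORC GC;
# msan is a Clang feature, so select clang as the backend C compiler
nim c --cc:clang -d:useMalloc --gc:orc \
  --passC:-fsanitize=memory --passL:-fsanitize=memory \
  myprog.nim
```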
1. https://github.com/nim-lang/Nim/issues/14035
2. https://forum.nim-lang.org/t/6610
I'm hoping someone builds a great web framework with it, and a library like Ecto for better PostgreSQL access. This language has great potential to build faster software.
Here's a repo I found: https://github.com/Jipok/Nim-SDL2-and-Emscripten
In the end I still went with Rust, simply because it's more popular, but my initial impression was that Nim is a really fun language to work in, and much much easier to pick up than Rust.
That's my impression so far, too. Previously, I already had some experiences with C, Pascal, and Python. Then learning Nim just feels natural.
Not so much with Rust. Well of course it's not surprising, with memory safety as one of its goals.
Where is this idea coming from that memory safety has to be complicated? Nearly all languages (basically everything except C/C++/Assembler) people are using are memory safe. And usually it's not complicated at all.
Why did Nim decide to allow null pointers? They must have had a very good reason, given its young age?
That said, you can already declare `type MyPtr = ptr int not nil`, but the compiler is still clunky on proofs and needs a lot of help.
It is planned to have a much better prover in the future, and ultimately Z3 integration for such safety features: https://nim-lang.org/docs/drnim.html
I’ve said it before and I’ll say it again ... I wish more folks would use Nim for web development.
I often wonder why anyone would use a language like Zig or Rust for Web development. I am very much biased in favour of Nim here, but to me in general a non-GC'd language seems like overkill for web development. So I would rule those languages out straight away.
I can't speak to Elixir, likely the main difference will be the lack of a mature Django-like framework. I'm assuming that Elixir has one, whereas Nim doesn't. If you're looking for a fun project, I would love to see that made for Nim and happy to give pointers if you need them :)
* Has a GC, but you can remove it. This is Nim and D.
* Relies on pervasive refcounting. This is Nim if you choose that implementation, Swift.
* Has no GC. This is Zig and Rust. (Though obviously you can use refcounting in these languages, but it is as a library.)
While this focuses on a specific aspect of these languages, I think it also represents their philosophies pretty well. Nim and D start from "what if we had a GC?" and then try to make things nicer down the stack. Rust and Zig ask "how nice can we make things starting from nothing?"
There are also additional factors that may or may not play in here, depending on what your needs are. Arguably, Rust is starting to break out of the "niche language" stage and move into the "significant projects and is sticking around" phase, whereas many of these other languages aren't quite there yet. This can matter with things like getting help, package support... some people love the open frontiers of new languages, others want something more mature. https://nimble.directory/ has 1,431 packages at the time of writing, https://crates.io/ has 48,197.
Same questions apply to Nim too of course, but I believe Rust's focus on newbies and pretty much trivial crates/cargo new pkg addition has led to a lot of cruft in there. Not to mention squatters (even if those are probably not a majority).
Also, I would challenge your classification of ARC being in the same category as Swift, memory-handling wise. Nim's ARC has hard realtime capabilities, would that be possible with pervasive RC?
- Nim and rust have macros.
- D has very high-quality metaprogramming (probably better than any other language without macros).
- (Afaik swift and zig have fairly normal templates. I don't know as much about those.)
- D and zig have compile-time function execution (think c++ constexpr on steroids).
- Swift is likely to be the slowest of the bunch; like go, though it's technically compiled to native code, its performance profile is closer to that of a managed language. The others should be generally on par with each other and with c.
I am super bummed that there is (effectively) no debugging. I am too lazy to mess with VS Code to get gdb working; it should just work already. Someday, I guess. Maybe JetBrains will save us. With real IDE support, Nim would sweep the nation.
> I started the Nim project after an unsuccessful search for a good systems programming language. Back then (2006) there were essentially two lines of systems programming languages:
> The C (C, C++, Objective C) family of languages.
> The Pascal (Pascal, Modula 2, Modula 3, Ada, Oberon) family of languages.
> The C-family of languages has quirky syntax, grossly unsafe semantics and slow compilers but is overall quite flexible to use. This is mostly thanks to its meta-programming features like the preprocessor and, in C++'s case, to templates.
> The Pascal family of languages has an unpleasant, overly verbose syntax but fast compilers. It also has stronger type systems and extensive runtime checks make it far safer to use. However, it lacks most of the metaprogramming capabilities that I wanted to see in a language.
> For some reason, neither family looked at Lisp to take inspiration from its macro system, which is a very nice fit for systems programming as a macro system pushes complexity from runtime to compile-time.
> And neither family looked much at the upcoming scripting languages like Python or Ruby which focussed on usability and making programmers more productive. Back then these attributes were ascribed to their usage of dynamic typing, but if you looked closely, most of their nice attributes were independent of dynamic typing.
> So that's why I had to create Nim; there was a hole in the programming language landscape for a systems programming language that took Ada's strong type system and safety aspects, Lisp's metaprogramming system so that the Nim programmer can operate on the level of abstraction that suits the problem domain and a language that uses Python's readable but concise syntax.
I'm wondering why people underestimate graphs that much. It's a lot easier to explicitly represent dependencies between your definitions as a graph; you not only avoid such issues but also get rid of unnecessary passes. I did that in my compiler and it works great.
I can speak only for Red: it takes its heritage from Lisp, Forth, and Logo; has an embedded cross-platform GUI engine with a dedicated DSL for UI building, a C-like sub-language for system-level programming, an OMeta-like PEG parser, and a unique type system with literal forms for things like currencies, URLs, e-mails, dates, and hashtags; all of that fitting in a one-megabyte binary and valuing human-centered design above all.
My interest in scientific applications pushes me towards Julia, but the user experience has so far been strictly worse than Python, so I just don't bother with it as much as I might like to.
On the other hand, I am drawn to experiment with Nim (and to some extent Rust as well) because they feel better constructed, having more professional feeling tools and approaches to packaging. The downside is that their core strengths are in use-cases which aren't so aligned with my interests.
The strength of the Python packaging ecosystem makes me doubtful of the impact Julia can have. Meanwhile for Nim, it feels to me like awareness and adoption suffer a fair bit from competing with Rust for mindshare.
For example, if you want to do something more exploratory (like some research or data analysis) that can still easily scale up to HPC you can use Julia, if you want to create something reliable with small binaries and no start-up issues or use in more resource constrained environments you can use Nim.
What worries me is the fragmentation, and the fact that no one language seems to check all of the (subjective set of) boxes for a general purpose high-speed, ahead-of-time compiled language [0].
E.g., Crystal seems to be the only one supporting a modern concurrency story (similar to Go's), but it has a huge problem with compile times.
Nim looks nice in many respects, but last I checked, it doesn't have anything like Go-style concurrency. Maybe not on everyone's wishlist, but as the world moves toward flow everything/everywhere[1], I personally find this to be a problem.
[0] https://docs.google.com/spreadsheets/d/1BAiJR026ih1U8HoRw__n...
[1] https://www.amazon.com/Flow-Architectures-Streaming-Event-Dr...
In short, I think at least for the linked to table you mentioned, Nim does check all the boxes.