However, Go and other languages have a huge ecosystem and many more libraries. Nim only has a few web servers/frameworks, for example. Even if Nim's web frameworks/servers (like httpbeast) are quite fast[2], they lack the completeness that exists for other languages.
Until then, if you are looking for a systems programming language, you owe it to yourself to investigate Nim[3], alongside Go, Crystal, Julia, D, Rust, Haskell, etc. The tooling is fantastic and the Nim compiler produces clean, cross-platform C code (or JS[4]!) that can be automatically fed into gcc, clang, mingw-w64, etc. It's a language that's undergoing rapid changes, but almost all of the changes are in the libraries and it's exciting to see all of the innovation there -- and as libraries increase and mature, it will become a really compelling application language as well.
The community is extremely active, and issues are promptly dealt with. For HNers, it's an opportunity to still make a huge difference by contributing to a relatively young language, compared to getting drowned out by all the noise in a more mature language community.
0. https://nim-lang.org/docs/tut1.html
1. https://en.wikipedia.org/wiki/Metaprogramming
2. https://www.techempower.com/benchmarks/
Go, Crystal, Julia, Haskell and Nim all have a runtime that sort of precludes their use as a true systems language. I agree with your other points, and I like Nim; I wish it were more popular, but I can't convince myself it's worth the time investment to learn it. I already know Rust and Haskell, and between those two there isn't much space where Nim would be a good fit and the others wouldn't.
- Algol 68
- Mesa/Cedar
- Sing#
- System C#
- Modula-2+
- Modula-3
- Oberon
- Oberon-2
- Active Oberon
- Oberon-07
- Component Pascal
- Lisp (regarding Lisp Machines and its derivatives)
- Java (when deployed bare metal on embedded devices, e.g. Aicas, PTC)
- Swift
- D
- C# (when used alongside .NET Native, IL2CPP, CoreRT, Netduino, Meadow)
The term is nebulous though.
It would be better for someone to work on an alternate, Python-like input syntax (focusing on readability and good intuition, and perhaps more attractive to novice programmers) for some established language, like Rust. Working on a "young" language, you just miss the chance of contributing to an ecosystem that's already been in development for quite some time, and where efforts aren't going to be left stranded as the bulk of the dev community chooses to go for something else.
Would you still make this point if you were comparing...
for example, Rust and C++?
... where Rust is the "young" language? Working on such a young language (and, FWIW, Rust is younger than Nim), you might miss the chance of contributing to an ecosystem that's already been in development for quite some time.
Not every language grows up with a silver spoon from Mozilla or Google.
I think the only globally optimal strategy in hobby open source projects is doing whatever seems most fun to you, and for many people that means using the language that no one uses (or making your own language that no one uses).
For example:
https://github.com/drujensen/fib
https://gist.github.com/sdwfrost/7c660322c6c33961297a826df4c...
https://github.com/kostya/benchmarks
My general sense is that Nim is most competitive as an alternative to Python and/or things like Julia, in that it's as expressive and easy to understand as those, but has performance closer to C or Rust than something like Python. Julia is similar to Nim but I think Nim seems cleaner in its implementation overall, and targets much more general use scenarios.
I've been really impressed by Nim. There are some little things that have irritated me, like case insensitivity, but I wish it got more traction in the community. Right now the only thing that it doesn't seem to have going for it is library support. For the numerical applications I do I'd prefer it over everything else, except everything else has huge resource bases, so something that's already packaged in those languages would have to be done from scratch in Nim, which isn't feasible.
* vs many languages: https://github.com/frol/completely-unscientific-benchmarks
* vs Julia and Python: https://github.com/SimonDanisch/julia-challenge/issues/1
* web frameworks vs many: https://github.com/the-benchmarker/web-frameworks
* Safe and Automatic Live Update for Operating Systems, https://www.cs.vu.nl/~giuffrida/papers/asplos-2013.pdf
* Automating Live Update for Generic Server Programs, https://www.cs.vu.nl/~giuffrida/papers/lu_tse16.pdf
And I'm sure hot code reloading has been done in Forth for decades.
Otherwise, it's such an overwhelmingly pleasing toolset to work with. Clean code, ultra-fast compilation producing rock-solid executables as small and tight as you care to make them. Joy all around.
Yup. It's too bad there has been no en masse conversion/import of popular libraries.
The thing that makes python such a terrific swiss-army-knife is that there are libraries available for just about every imaginable use case.
One takes an interpreted function, which can be defined at runtime, compiles it to assembly, and has the assembler generate machine code in binary program space. The Lisp system then notes that this is now a compiled function.
http://www.softwarepreservation.org/projects/LISP/book/LISP%...
> The LISP Compiler is a program written in LISP that translates S-expression definitions of functions into machine language subroutines. It is an optional feature that makes programs run many times faster than they would if they were to be interpreted at run time by the interpreter.
> When the compiler is called upon to compile a function, it looks for an EXPR or FEXPR on the property list of the function name. The compiler then translates this S-expression into an S-expression that represents a subroutine in the LISP Assembly Language (LAP). LAP then proceeds to assemble this program into binary program space. Thus an EXPR, or an FEXPR, has been changed to a SUBR or an FSUBR, respectively.
...
> 1. It is not necessary to compile all of the functions that are used in a particular run. The interpreter is designed to link with compiled functions. Compiled functions that use interpreted functions will call the interpreter to evaluate these at run time.
> 2. The order in which functions are compiled is of no significance. It is not even necessary to have all of the functions defined until they are actually used at run time. (Specialforms are an exception to this rule. They must be defined before any function that calls them is compiled. )
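The workflow the quoted passage describes (a function defined at runtime from source, translated into executable form, then installed under its name) has a rough analogue in Python using the standard `compile`/`exec` machinery. This is only an illustrative sketch, not the Lisp mechanism itself; the `fib` function is an arbitrary example:

```python
# Rough Python analogue of the LISP workflow: a function defined at
# runtime from source text, compiled to a code object, then installed
# by name so later calls use the compiled definition.
src = "def fib(n):\n    return n if n < 2 else fib(n-1) + fib(n-2)\n"

code = compile(src, "<runtime>", "exec")  # translate source to bytecode
ns = {}
exec(code, ns)                            # install the compiled definition

print(ns["fib"](10))  # the installed function is now callable
```

Unlike the LISP compiler, this produces interpreter bytecode rather than native machine code, but the shape of the workflow (runtime definition, translation, installation) is the same.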
It's in the CL standard, and compliant implementations support it. SBCL, LispWorks, Allegro, CCL, and others all do AOT native compilation, from the REPL and via eval (which should be avoided, in any case). ABCL does AOT compilation to JVM bytecode, and CLISP to its own bytecode.
It's a cool feature for Nim to have, but claiming to be the first makes them look bad.
[1] https://docs.microsoft.com/en-us/visualstudio/debugger/edit-...
It's worth a comparison to Common Lisp which thought about this feature a lot more and built it into the language rather than having it hacked in by external tools... e.g. if you redefine a class definition in Java say to add a new storage field, well you can't using default hotswap but with JRebel you can, but still any existing objects will continue to point to the "old class"'s code and the new field won't be available... Common Lisp defines a generic function 'update-instance-for-redefined-class that you extend before you do the class edit, and now your existing objects will work with the new code.
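Python has no built-in hook like `update-instance-for-redefined-class`, but the same migration of live objects can be sketched by hand, since `__class__` is assignable. This is a hedged analogue of the CLOS behavior described above, with hypothetical class names:

```python
class Point:                      # "old class": two storage fields
    def __init__(self, x, y):
        self.x, self.y = x, y

p = Point(1, 2)                   # a live instance of the old class

class Point3D:                    # "new class" with an extra field
    def __init__(self, x, y, z=0):
        self.x, self.y, self.z = x, y, z

# Manual analogue of update-instance-for-redefined-class: repoint the
# existing object at the new class and initialize the new slot.
p.__class__ = Point3D
p.z = 0

print(p.x, p.y, p.z)              # the old instance now has the new field
```

The difference is exactly the one the comment makes: in CLOS this migration is triggered automatically for all existing instances when the class is redefined, while here each live object has to be found and patched explicitly.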
- Type system: strong, static typing.
- Generics (unlike Go).
- Modules (unlike C++).
- Multiple, optional GCs.
- Metaprogramming support.
- Compiles to native binaries.
What excites me most is that it's as efficient as C.

I hope it'll mature more this year, at least for the JavaScript backend, so that I could write Nim for the frontend too (with bindings for popular JS frameworks like React, Vue, ..).
The overhead of that context switch is a bit high but it allows the code loading facilities of the BEAM to reload HiPE compiled modules. This works because all processes yield to the scheduler, which acts as a kind of code-swap safe-point. The usual module-local vs module-remote call rules apply here when old versions are purged.
erlc +native
You will get a .beam file containing both the bytecode and the native code.

It's been fun. Thanks for that, Nim.
"Each thread has its own (garbage collected) heap and sharing of memory is restricted to global variables. This helps to prevent race conditions. GC efficiency is improved quite a lot, because the GC never has to stop other threads and see what they reference. Memory allocation requires no lock at all! This design easily scales to massive multicore processors that are becoming the norm."
To me, that sounds perfect for writing typical apps.
C++ did it in the 80's with Lucid C++ and the ill-fated VisualAge for C++ v4.0.

Eiffel did too, thanks to its MELT VM.
Hot code-reloading part of the talk: https://youtu.be/7WgCt0Wooeo?t=1519
It has been merged to devel branch: https://github.com/nim-lang/Nim/pull/10729
What does LCL stand for in this context?
The holy grail of code reloading is to upgrade the code of an HTTP server while it is running and without disturbing any requests being processed. Very few languages other than Erlang are able to do that correctly. Some languages claim to support it, but when you experiment with them you discover "quirks" making it impossible in practice.
Since HTTP is transient, it's actually a bit easier than a raw socket: you can expect a connection to go away soon, and even with HTTP/2 you can often get away with just closing a socket as long as it isn't currently active. Many languages can smoothly upgrade an HTTP server by handing off the listening socket to a new process with new code. But even that won't save you for live sockets, because even if you hand off the socket, you haven't got a clean mechanism for handing off its accompanying state.
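A minimal sketch of the descriptor side of that handoff, in Python. In a real upgrade the fd would cross a process boundary via fork/exec inheritance or SCM_RIGHTS fd passing; here it is simply rewrapped in the same process, to show that the OS-level listening socket exists independently of the object (and, by extension, the code) that created it:

```python
import socket

# Bind a listening socket; the OS-level descriptor is the thing that
# actually gets handed off to a new process during an upgrade.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen()

# "New code" side: rebuild a socket object from the raw descriptor.
# socket.fromfd duplicates the fd, so both objects refer to the same
# underlying listening socket and the same bound address.
handed_off = socket.fromfd(srv.fileno(), socket.AF_INET, socket.SOCK_STREAM)

print(handed_off.getsockname() == srv.getsockname())  # same address
```

This is exactly the easy half: the kernel keeps the listen queue alive across the handoff. The hard half, which the comment points out, is the application state attached to already-accepted connections, which no fd-passing trick carries over.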
Several interpreted languages can sort of do this, but I'd call it in an "unprincipled" manner by just slamming new code in place and hoping for the best. Erlang explicitly upgrades the gen_* instances and you can provide a function for converting the old state to the new state cleanly.
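The Erlang convention described here can be sketched in Python. The `code_change` function below is a hypothetical stand-in for a gen_server's code_change callback: an explicit converter from the old state shape to the one the new handler expects, rather than "slamming new code in place" over old state:

```python
# Hypothetical sketch of Erlang-style state upgrade, in Python.

def handle_v1(state):
    # Old code: state is a bare integer counter.
    return state + 1

def code_change(old_state):
    # Explicit conversion hook, like gen_server's code_change callback:
    # turn the old integer counter into the new dict-shaped state.
    return {"count": old_state}

def handle_v2(state):
    # New code: expects the dict shape produced by code_change.
    state["count"] += 1
    return state

state = 0
state = handle_v1(state)    # old code running
state = code_change(state)  # upgrade point: convert state deliberately
state = handle_v2(state)    # new code picks up the converted state

print(state)  # -> {'count': 2}
```

The point of the pattern is that the conversion is a first-class, testable step: if the new code's state shape changed, the old state is migrated on purpose, not by accident.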
That's what I meant. :) Hot reloading when nothing is "in flight" isn't so hard. The Erlang the Movie example, hot-fixing a PBX without disturbing phone calls in progress (real time requirements!), is really hard.
I imagine that runtime type errors are what make this possible.
You can kind of do it with node.js, but man do things get ugly fast when you manage state/connections in modules on reload.
It seems like it forces pointers for everything, which seems to me will break a lot of optimizations (like inlining).
Edit: I guess production use is limited anyway since you can't add, modify, or remove types with HCR yet.
Edit2: I just saw in the video that they said it's 2x slower, so it definitely can't be used for a lot of production workflows. Still useful, though.
I'm aware of hot code reloading for interpreted languages; Google just gives links about web page loading. I'm also aware of it at the OS level. I don't really understand why you'd want this in a compiled language.
Shared-library-based plugins, kernel modules, ...
Why not?
Wish I could help you with this, but I have been out of the loop with Nim recently.
> I don't really understand why you'd want this in a compiled language.
Quick iteration in development. Not needing to manually recompile every time you make a small change.