Looking around a bit, I came across a couple of implementations that might be of interest for those wanting to mash up "some kind of systems programming" and "Scheme": Bigloo (interprets / compiles to executables, Java bytecode, or, experimentally, .NET) and Larceny (interprets / compiles directly to machine code, optionally via C):
is another good one: compiles to binary, and has many libraries (eggs)
I know, for example, that people want the Racket VM to be implemented in Chez Scheme because Chez is super fast. But what about all the other implementations?
Also, as I'm currently writing an R5RS/Clojure hybrid in Kotlin, can anyone please share a _simple_ standard algorithm for implementing the R5RS macro system and macro expander?
The only thing I could find is https://www.cs.indiana.edu/chezscheme/syntax-case/
And people say that Scheme has a very minimalist design and is easy to implement...
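For what it's worth, the core of a syntax-rules-style expander is just structural pattern matching plus template substitution; the genuinely hard part, hygiene, is layered on top via renaming. Here's a minimal non-hygienic sketch (no ellipsis support) in Java, since I'm on the JVM anyway; all names are made up for illustration:

```java
import java.util.*;

public class MiniExpander {
    // Match a pattern against a form. Symbols in `literals` must match
    // themselves; every other symbol is a pattern variable and binds.
    static boolean match(Object pat, Object form, Set<String> literals, Map<String, Object> env) {
        if (pat instanceof String s) {
            if (literals.contains(s)) return s.equals(form);
            env.put(s, form);                 // pattern variable: bind it
            return true;
        }
        if (pat instanceof List<?> p && form instanceof List<?> f) {
            if (p.size() != f.size()) return false;  // no `...` in this sketch
            for (int i = 0; i < p.size(); i++)
                if (!match(p.get(i), f.get(i), literals, env)) return false;
            return true;
        }
        return false;
    }

    // Walk the template, replacing bound pattern variables with their matches.
    static Object expand(Object tmpl, Map<String, Object> env) {
        if (tmpl instanceof String s) return env.getOrDefault(s, s);
        List<Object> out = new ArrayList<>();
        for (Object x : (List<?>) tmpl) out.add(expand(x, env));
        return out;
    }

    public static void main(String[] args) {
        // (swap! a b) => (let ((tmp a)) (set! a b) (set! b tmp))
        Object pattern  = List.of("_", "a", "b");
        Object template = List.of("let", List.of(List.of("tmp", "a")),
                                  List.of("set!", "a", "b"), List.of("set!", "b", "tmp"));
        Map<String, Object> env = new HashMap<>();
        Object use = List.of("swap!", "x", "y");
        if (match(pattern, use, Set.of(), env))
            System.out.println(expand(template, env));
    }
}
```

A real R5RS expander additionally needs ellipsis (`...`) patterns, literal identifiers, and hygienic renaming of template-introduced identifiers like `tmp`; the syntax-case paper linked above covers all of that.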
As for Racket, IIUC the plan is to migrate the current C VM to a Scheme one built on Chez Scheme. It won't be a case of Racket merely running on top of Chez: the Racket fork of Chez and Racket itself will be one and the same thing.
Gerbil's docs have an example of this: https://github.com/vyzo/gerbil/blob/master/doc/tutorial/lang...
There is also a SRFI for an improved low-level hygienic macro facility that is rather elegant. Can't remember the number, though. 70-something.
You must be new here..
Sadly, Stalin is unmaintained. Its whole-program optimization techniques were really advanced. I remember it even got my ivory-tower professors, who were big in the static-analysis field, excited.
Now, a tricky question. I'm mostly unfamiliar with using Scheme for real-world code. Will the ongoing merger of Chez with Racket make the latter the clear winner in the Scheme camp? How are the libraries and FFI?
A problem with Scheme is excessive fragmentation. Having a clear winner would be cool for library support. Racket is great due to multiple paradigms and DSLs [1]. I hope it eventually becomes a very practical Lisp with all Mozart/Oz semantic goodies.
I am trying to do all my projects in a Lisp. Lately this is either Clojure or SBCL. Both have good libraries and decent FFIs. Clasp (Common Lisp on LLVM) [2] has gotten me excited, as good interfacing with C++ will be great to access a lot of quick numerics code.
[1] https://beautifulracket.com/appendix/domain-specific-languag...
Academically, they have all the super giants of Scheme. It is only within the past few years, with the renaming to Racket, that the success story has begun unfolding, after over 20 years of development. I think Racket will end up being the clear leader not just of Scheme but of Lisp. Reminds me of when R took off 5 or 6 years ago.
Yes. And not only that, but also the fact that each Scheme implementation is slightly (at best) different from every other implementation. I guess that is because the Scheme standards are not that strict (compare, for example, with the Java specs).
There is certainly a reason why he has stuck with Gambit and not my beloved Racket. This seems perfect for Racket's macros. I just don't know Gambit's macros well enough to compare them to Racket's.
Here are the benchmarks for Chez, Racket, and Gambit: https://ecraven.github.io/r7rs-benchmarks/
https://github.com/racket/racket7
Is this research work like Pycket (Racket on PyPy), or is this a blessed project that Racket's official implementation will cut over to?
> TL;DR: I expect the main Racket distribution to run on Chez Scheme instead of the current Racket VM sometime in the next couple of years.
https://groups.google.com/d/msg/racket-dev/2BV3ElyfF8Y/4RSd3...
I took it to mean a Scheme used for developing close-to-the-hardware programs, whereas maybe others are taking it to mean a language that can be used to write the common scripts and tools used by systems engineers.
Comes with an interpreter and supports macros, etc. Gambit also has an easy C FFI, an option for infix syntax, massive threading, and a full numeric tower.
Easy to hack on (in my opinion), and performs very well. Gambit-C has been my "goto" Scheme for the past decade.
It could be used as an "extension language", but that's not its main purpose. Mostly, it's a way to deliver Scheme code that runs fast.
Racket has a better IDE -- but I like the simplicity of Gambit-C.
Why do you think it can't be done?
It was already used for systems programming in the early '80s; whole machines were programmed in Lisp at a low level: the TI Explorer, Xerox workstations, and others.
I've seen it used for everything from operating-system kernels to database systems to anything that talks to the network to command-line utilities like ls.
Garbage-collected languages are fine for all of these except maybe kernels. But even there, you might use a system that lets you do critical sections in a mode where the GC is disabled, or in a sublanguage that is guaranteed to be GC-free.
But when you can do real-time video processing in a (at the time) young alternative implementation of Python [1], I'm not sure what we're arguing about anymore...
[1] https://morepypy.blogspot.no/2011/07/realtime-image-processi...
There are a few HFT firms, most notably Virtu, that use Java. But my understanding from having interviewed people that worked there is that it's so convoluted to avoid GC pauses that you might as well be using C++.
Well, Niklaus Wirth, for one, was able to design an entire OS using a language (Oberon) that had garbage collection.
> There are a few HFT firms, most notably Virtu, that use Java.
The hedge fund where I worked used Java. We didn't have problems with latency, although admittedly we weren't doing the really low-latency stuff. There are patterns you can use in Java that basically emulate manual memory management. This is a little harder in Java than in C# because there are no value types, but java.nio basically lets you do whatever you want, so you can always allocate everything up-front. In fact, just for fun I did this myself in an audio setting: I had an audio engine with a ring buffer to pass messages to the event loop, and with a little care I was able to ensure that literally no garbage made it past the first phase (i.e., locals). Those are essentially free, so the GC wasn't getting any pressure at all. It wasn't that hard to write, and with the excellent profiling tools available on the JVM it's easy to see whether you're triggering longer-term GC sweeps or not.
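For concreteness, the kind of pre-allocated ring buffer I mean looks roughly like this (an illustrative single-producer/single-consumer sketch from memory, not the actual code):

```java
import java.util.concurrent.atomic.AtomicLong;

// All message slots are allocated once in the constructor; after that,
// offer/poll only mutate existing objects, so the hot path produces no
// garbage and the GC sees no pressure from messaging at all.
public class MessageRing {
    public static final class Msg { public int kind; public double value; }

    private final Msg[] slots;
    private final int mask;                        // capacity must be a power of two
    private final AtomicLong head = new AtomicLong(); // next slot to write
    private final AtomicLong tail = new AtomicLong(); // next slot to read

    public MessageRing(int capacityPow2) {
        slots = new Msg[capacityPow2];
        for (int i = 0; i < capacityPow2; i++) slots[i] = new Msg(); // up-front allocation
        mask = capacityPow2 - 1;
    }

    /** Producer side: copy fields into a pre-allocated slot; allocates nothing. */
    public boolean offer(int kind, double value) {
        long h = head.get();
        if (h - tail.get() == slots.length) return false; // full
        Msg m = slots[(int) (h & mask)];
        m.kind = kind;
        m.value = value;
        head.lazySet(h + 1);
        return true;
    }

    /** Consumer side: copy fields out; the slot object itself is reused forever. */
    public boolean poll(Msg out) {
        long t = tail.get();
        if (t == head.get()) return false; // empty
        Msg m = slots[(int) (t & mask)];
        out.kind = m.kind;
        out.value = m.value;
        tail.lazySet(t + 1);
        return true;
    }
}
```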
Finally, I would add that there are scenarios where C++ moves into automatic memory management. std::shared_ptr is an example of (shitty) automatic memory management, and there have been efforts, notably by Herb Sutter, to provide precise garbage collection as a library. For some non-blocking multithreaded algorithms, GC schemes are actually necessary, since allocation becomes one of the prime vectors through which blocking occurs.
So, in conclusion: while it's certainly the case that most systems programming is done in languages with manual memory management, it's a little less binary than you suggest. That said, I'm writing a DAW (zenaud.io), and for that I am using C++, mainly for the memory management :)
Modern desktop and mobile device operating systems often exhibit embarrassing lulls in responsiveness that resemble pauses in a rudimentary garbage collector.
Real-time operation can be "bolted on" to a non-real time operating system as a small specialized kernel which has higher priority access to the CPU and its own, separate resource management.
I've done a very minimal amount of this... the gist is that you avoid GC pauses by avoiding allocation. This translates into reusing objects via pools, etc., along with the assorted complexities that come from having to explicitly manage object lifecycles. In critical applications you often want to avoid dynamic memory in C/C++ anyway, so it may not be all that different.
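The pool idea fits in a few lines; a minimal sketch (illustrative names, not from any particular library) looks like this:

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

// A trivial object pool: acquire() hands back a previously released
// instance instead of allocating a new one, so a steady-state workload
// creates no garbage. The caller owns the lifecycle: it must release()
// each object when done and reset its state before reuse.
public class Pool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public Pool(Supplier<T> factory, int preallocate) {
        this.factory = factory;
        for (int i = 0; i < preallocate; i++) free.push(factory.get()); // warm up
    }

    /** Reuse a free instance if one exists; allocate only as a fallback. */
    public T acquire() { return free.isEmpty() ? factory.get() : free.pop(); }

    /** Return an instance to the pool for later reuse. */
    public void release(T obj) { free.push(obj); }
}
```

The "assorted complexities" show up exactly here: forget a release() and you leak from the pool's point of view; release twice, or keep using an object after releasing it, and you get the same aliasing bugs manual memory management has.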