After years of dealing with points of frustration in C++ land, I've created my own programming language. It emphasizes compile-time code generation, seamless C/C++ interoperability, and easier 3rd-party dependency integration. It's like "C in S-expressions", but offers much more than just that.
I had a hard time trimming this article down because of how excited I am about the language. Feel free to skim and read more in the sections that pique your interest.
I don't expect everyone to love it and adopt it. I do hope that some of the ideas are interesting to fellow programmers. I found it eye-opening to realize how much better my development environment could become once I opened this door.
[0] https://www.reddit.com/r/gamedev/comments/kh1p0a/cakelisp_a_...
I made the same thing a while back, and one of the neat, simple things you can do is implement function overloading à la C++. All you need is a way to serialize types to strings that are valid identifiers; then you (1) at the definition site, append the string form of each parameter's type to the function's name, and (2) at each call site, do the same with the argument types and emit a normal function call that dispatches to the right definition. Et voilà: function overloading! It's not as powerful as C++'s, which takes conversions and such into account, but it's an interesting experiment nonetheless. You can see how I did it here: https://github.com/zc1036/ivy/blob/master/src/lib/std/overlo... (DEFUN2 is the version of DEFUN in my language that supports overloading.)
I used something similar to your technique for compile-time variable destruction. The compiler doesn't know the type, so a macro generates a callback that casts the pointer back to its concrete type and deletes it. These callbacks are named after the type, so they can be lazily added and reused.
How did you implement the macro expansion? Do you translate the macros to C/C++, compile them with a C/C++ compiler, and execute the temporary binary, or do you have an interpreter for that?
I work on a somewhat similar project called Liz, which is basically a Lisp-flavored dialect of Zig [1]. I haven't implemented user-defined macros yet; I'm planning to learn more about comptime and its limitations first. But the compiler itself uses macro expansion to implement many features.
This is a serious exaggeration.
Common Lisp has extremely good compilers that can match C performance.
There are plenty of Scheme implementations (I use Chez) with very good performance characteristics too.
I think Lisps tend to optimize for throughput, but games have very strict latency requirements. Garbage-collection pauses can cause frame-pacing issues (not that C solves that completely, but at least it isn't a built-in disadvantage of idiomatic use of the language).
¹ Frequently on par or even better than LuaJIT, though it can take some work to get it there.
Provided memory is no object.
In general it takes five times as much memory for a GC'd program to be as performant as one with explicit memory management. See: https://www.cics.umass.edu/~emery/pubs/gcvsmalloc.pdf
There's a reason why the most interesting work these days is being done in and on languages like Rust, which has no GC but still saves you 90% of the work and close to 100% of the pain of bugs that are inherent to explicit memory management.
https://devblogs.microsoft.com/aspnet/grpc-performance-impro...
In this blanket form, the statement is just wrong. Yes, with GC you need a larger heap, since unreferenced objects remain on the heap until collected, and you want enough headroom that collections are infrequent and each one reclaims many objects (especially with a generational GC, where you want low survivor rates in the youngest generation).
However, how much space you want to reserve for that depends on many factors. The extra space is usually proportional to the allocation and deallocation rates, not the total heap size; long-lived data sitting on the heap doesn't count toward it.

Which leads to the allocation behavior of your program in general. For best performance, your program shouldn't blindly create garbage, but only where it's needed. A lot of data can be stack-allocated, so it never touches the GC. And depending on the language, some memory can be manually managed for bulk data, either allocated entirely outside the GC'd heap or by keeping references alive to memory that gets manually reused. None of this really counts toward the extra-space calculation either.
The programming language used plays a huge role here, and the paper you cited uses Java, a quite allocation-happy language, so heap pressure is higher and you need more extra space to stay performant.
https://github.com/kiselgra/c-mera
https://github.com/eudoxia0/cmacro (Written in Common Lisp, doesn't use S-exp syntax)
https://github.com/tomhrr/dale (Prev disc: https://news.ycombinator.com/item?id=14079573)
Newer stuff:
https://github.com/saman-pasha/lcc (No mention of meta-programming)
Lesser known:
https://github.com/deplinenoise/c-amplify (No docs, no update since 2010)
p.s. Carp is still actively maintained, should be on the other list.
Just want to note that there is a large benefit to this kind of safety even if you're not writing safety-critical code: fewer bugs! The biggest benefit I've seen from Rust is that entire classes of bugs, some of which can be extremely difficult to root-cause and fix, are removed by design. So you spend significantly less time in the latter half of a project tracking down bugs, which is more than enough to offset the productivity loss at the beginning.
So while safety and maintainability are what Rust gets marketed for, the ergonomics and overall productivity of the language are enough to sell me on it for game dev. Languages like Zig and Jai also seem interesting in this space, but they're far from ready for production use. The Rust ecosystem is ready for production now, and the language is a pleasure to work with.
https://gamelisp.rs/reference/performance-figures.html#bench...
It seems more intended for scripting glue code in a Rust project.
https://voodoo-slide.blogspot.com/2010/01/amplifying-c.html https://github.com/deplinenoise/c-amplify
Some of the downsides mentioned could, I believe, easily be taken care of by a macro. The path traversal could most likely be flattened with one, and the same goes for the type definitions. For example,
const char* myString = "Blah";
could become

  (var my-string (* (const char)) "Blah")

or even

  (var my-string conststring "Blah")
with the appropriate typedef.

Edit: The GitHub repository implies that this could be used for purposes other than games:
> The goal is a metaprogrammable, hot-reloadable, non-garbage-collected language ideal for high performance, iteratively-developed programs (especially games).
Once I get pure C output supported, it should be suitable even for embedded C environments.
Why did the creator create this? Wouldn't Carp have served the purpose?
When I say seamless, I'm going for as close as possible, i.e. it should feel even easier to use C from Cakelisp than from C itself. The build system especially makes this possible.
That doesn't mean it doesn't perform well; I'm currently writing a GBA game with it: https://github.com/TimDeve/radicorn
Cakelisp differs from Zig in that arbitrary code modification is supported, which is a step closer to the modifiable environment of Lisps. Very few languages besides Lisp allow you to do things like "iterate over every defined function, change their bodies, and create a new function which calls them." Zig does not support that. It's extremely useful for some tasks (e.g. the hot-reloading code modification mentioned in the article).
Jai isn't out yet, but I was inspired by many of the comptime ideas.
Lots of things don't work in D's CTFE.
It's certainly very useful, and much better than e.g. C++ constexpr. But it's still worlds away from proper Lisp metaprogramming.