Many of SBCL's optimizations are individually selectable via internal SB-* declarations. I was at least able to turn off all optimizations for debug/disassembly clarity while specifically enabling tail-call merging, so that our main loop wouldn't blow the stack in that build configuration. These aren't in the main documentation; I learned about them by asking in the #sbcl IRC channel on Freenode.
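For context, the standard knobs alone get you most of the way there (the internal SB-C qualities the parent mentions are undocumented, so this sketch sticks to the portable OPTIMIZE qualities; SBCL merges tail calls when the DEBUG quality is low enough relative to SPEED. EVENT-LOOP and PROCESS-ONE-EVENT are made-up names):

```lisp
;; Debug-friendly global policy: full debug info, no optimization.
;; This also disables tail-call merging, which keeps disassembly honest.
(declaim (optimize (speed 0) (debug 3) (safety 3)))

(defun event-loop (state)
  ;; Locally drop DEBUG below SPEED so SBCL compiles the self-call
  ;; below as a jump instead of growing the stack on every iteration.
  (declare (optimize (speed 2) (debug 1)))
  (event-loop (process-one-event state)))
```

The local declaration only affects this one function, so the rest of the image stays fully debuggable.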
You can set the size of the nursery directly with sb-ext:bytes-consed-between-gcs, as opposed to overprovisioning the heap to influence it. We've run with heaps in the 8-24GB range depending on deployment, and a nursery of at least 1GB seems to give us the best performance as well. We're looking at much larger heap sizes now, so who knows what will work best.
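Concretely, the setting is SETF-able at runtime, so a 1GB nursery is one line (heap size itself is still a runtime option; the 8GB figure below is just an example):

```lisp
;; Ask for a 1GB nursery directly instead of inflating the whole heap.
;; Takes effect on subsequent GC cycles.
(setf (sb-ext:bytes-consed-between-gcs) (* 1 1024 1024 1024))

;; The total heap is still set at startup, e.g.:
;;   sbcl --dynamic-space-size 8GB ...
```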
While we haven't hit heap exhaustion during compilation, we did hit multi-minute compilation lags for large macros (18,000 LoC from a first-level expansion). That was a reported performance bug in SBCL and was fixed a while back. Since the Debian packaging of SBCL lags the official releases quite a bit, fetching the latest version is always a manual job, but well worth it.
Great read, and really familiar. :-)
[1] https://github.com/tshatrov/ichiran/blob/master/dict-grammar...
I wonder if they're still hiring Lispers. I once passed on the opportunity to work in their Kiev office, but I might give it a shot again.
I used CL in a production environment a while back for a threaded queue worker and nowadays as the app server for my turtl project, and I still have yet to run into problems. It seems like you guys managed to push the boundaries and find workable solutions, which is really great.
Thanks for the writeup!
The Praetorian Bootcamp challenges are similar to the Matasano ones: https://www.praetorian.com/challenges/
Lisp Koans: https://github.com/google/lisp-koans
Project Euler: https://projecteuler.net/
Exercism: http://exercism.io/
99 Lisp Problems: http://www.ic.unicamp.br/~meidanis/courses/mc336/2006s2/func...
http://www.codewars.com/ also supports Clojure, as well as Haskell, which is not a Lisp but is functional.
HackerRank supports pretty much everything, but only because it handles all tests through stdio instead of a test suite, which results in a lot of irritating boilerplate code.
The deployment target is the well-understood JVM and standard web servers. There's a common HTTP middleware framework (Ring), lots of choices for HTML generation, and ClojureScript allows some code sharing with your client side (it compiles to JS).
On the back-end, you could use any Java library. On the front-end, you could use any JS framework (e.g. see https://github.com/omcljs/om)
Check out this:
https://github.com/bhauman/flappy-bird-demo
It uses Figwheel for dynamic state changes when you change the code, re-rendering without a reload, à la Bret Victor. The first time I saw it I was amazed. It speeds up prototyping so much.
The out-of-the-box performance is decent as well; Ring and Hiccup are pretty lean, but you can go for heavier frameworks (I don't have any experience with those).
I personally use Reagent + Ring, found it easy to use and get productive in a day.
If you haven't used Lisp or any homoiconic language before, it might look a little weird at first, but I found it easy to explain to people.
About the toy part, what is your definition of toy?
I may be in the minority, but that would drive me mad. I assume they're not routinely jumping between those stacks multiple times a day, but even so, is there really so much benefit that it's worth keeping track of how to do things in that many different environments?
"In our study of program design, we have seen that expert programmers control the complexity of their designs with the same general techniques used by designers of all complex systems. They combine primitive elements to form compound objects, they abstract compound objects to form higher-level building blocks, and they preserve modularity by adopting appropriate large-scale views of system structure. In illustrating these techniques, we have used Lisp as a language for describing processes and for constructing computational data objects and processes to model complex phenomena in the real world. However, as we confront increasingly complex problems, we will find that Lisp, or indeed any fixed programming language, is not sufficient for our needs. We must constantly turn to new languages in order to express our ideas more effectively."
At a certain level of software design tying together different programming languages that are each the right tool for their job becomes just another type of programming. I currently do data science work, but even then in a given week I typically use R, Python, Lua and Java (and often Scheme in the evenings for fun). Trying to make any one of those languages do something the other is much better at is a phenomenal waste of time.
On the system level, once prototyping ends, if there's something that Java does phenomenally better than R, but we need both, that implies you have two parts of the system different enough that they shouldn't be tightly coupled anyway. If you write a deep learning algorithm in Lua but want to do some statistical analysis on the results in R, it's good to force these things to be separated, because if in 5 years you find a better model for the Lua part (maybe some better algorithm in Julia), you want to be able to swap it out anyway.
The "best" might change over time, too.
It can be a headache to manage massively-polyglot environments. At the same time, it's also pretty great for a variety of reasons. I mean, we regularly use different data stores, messaging solutions, frameworks, etc. and I don't see why languages shouldn't also be up for shuffling.
Most shops don't think like that. Using too many languages often quickly becomes unmanageable.
I wouldn't use 6 different languages in the same project UNLESS they are parts of different server/CLI tools that work in isolation. I don't need a deep understanding of Go to use Docker, I don't need to be a PHP expert to deploy Drupal or WordPress, I don't need to know Ruby to use Vagrant, nor Java to use Cassandra. Using these tools in isolation is fine inside the same project.
You must know JavaScript in this day and age, given its de facto presence on the web.
Python is my go-to language, especially for mathematical analysis. I can do everything in Python that I used to need Matlab for. From my point of view, pick your favorite modern scripting language and run with it; the differences really are in the libraries, not the languages.
I have used Go, but it just doesn't do it for me. It doesn't offer me anything I can't get, better, in another language, especially when I can choose among Python (smaller headspace), Erlang (way better concurrency), or Lisp (way better abstraction power).
Erlang is my go-to language for concurrency. Once I've architected a solution in Erlang, I probably understand the problem.
Lisp is useful when my problem requires powerful abstraction. Otherwise, it gets in the way, because people can't resist using that power. Clojure has changed my opinion on this quite a bit, but I don't yet have a big project that fits its space.
Does anyone here have experience with the GCs of Allegro or LispWorks or any other commercial Lisp implementation?
Franz Inc. uses Allegro CL in a large database product, and they tuned the GC quite a bit for that. There have also been other GC-demanding applications on Allegro CL, for example in CAD and 3D design. They are now working on a concurrent GC, something that is still rare in the Lisp world.
Supposedly they have a concurrent GC in the works but I haven't played with it.
> I've worked a lot with LispWorks, and tuning the GC came down to the same method of programmatically calling a full GC after every N operations.
What is a "full GC" here? Do you mean even the "older" generations? (Assuming LispWorks also has a generational GC à la SBCL.)
In other words, would it have helped if the implementation were mark-compact rather than generational?
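In SBCL terms, at least, a "full GC" means collecting all generations rather than just the nursery. A minimal sketch of the periodic-collection pattern the parent describes (the operation counter and threshold are made up for illustration):

```lisp
(defvar *ops-since-gc* 0)

(defun maybe-full-gc (&optional (threshold 100000))
  ;; Every THRESHOLD operations, force a collection of *all*
  ;; generations, not just the nursery (generation 0).
  (when (>= (incf *ops-since-gc*) threshold)
    (setf *ops-since-gc* 0)
    (sb-ext:gc :full t)))
```

Without :full t, sb-ext:gc only collects the youngest generation, so long-lived garbage in the older generations keeps accumulating between full collections.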
Which is exactly why I feel Lisp doesn't see much use elsewhere :(
"We've built an esoteric application (even by Lisp standards), and in the process have hit some limits of our platform. One unexpected thing was heap exhaustion during compilation. We rely heavily on macros, and some of the largest ones expand into thousands of lines of low-level code. It turned out that SBCL compiler implements a lot of optimizations that allow us to enjoy quite fast generated code, but some of which require exponential time and memory resources. "
The nastier errors are lurking, for example, in the GC... there we move into C and assembler land...
Most platforms have nasty errors. With popular platforms one can hope that many of these have been found and fixed already. With languages/runtimes that are not so widespread in production, one is more likely to find these problems oneself, especially in the more complex runtimes.
- workaround
- investigation (and in this case, if you're on a closed platform you're busted)
Write an "esoteric" app and you'll start hitting the limits of your platform.
My two cents: if your macro expands to thousands of lines of code, you're probably doing something wrong. I would expect a macro to expand to a few lines containing function calls, where the functions themselves may hold however many lines of code... but expanding to thousands of lines INLINE via macros seems wrong.
And before anyone jumps on me -- I have 35 years of Zetalisp and Common Lisp experience and have written some pretty hairy sets of macros. There might turn out to be no easy way to make the expansions smaller, but there's nothing wrong with asking the question.
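The usual way to keep expansions small is the call-with-a-function pattern: the macro stays a thin veneer over an ordinary function. A sketch (WITH-RESOURCE, ACQUIRE-RESOURCE, and RELEASE-RESOURCE are hypothetical names for illustration):

```lisp
;; The logic lives in a function, compiled exactly once:
(defun call-with-resource (thunk)
  (let ((resource (acquire-resource)))
    (unwind-protect
         (funcall thunk resource)
      (release-resource resource))))

;; Every use site expands to one function call plus a closure,
;; not thousands of inlined lines:
(defmacro with-resource ((var) &body body)
  `(call-with-resource (lambda (,var) ,@body)))
```

The trade-off is a closure allocation and an indirect call per use, which is usually negligible next to the compile-time and code-size cost of fully inline expansions.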
https://github.com/lispgames/cl-sdl2/blob/ec64831109a17b0dda...
Edit: It's still not advice I would pay for, though.
There was Viaweb: http://www.paulgraham.com/avg.html
I'm told that Orbitz uses or used it a lot too.