For those problems that are amenable to Erlang's model, this is a fine solution. The only real improvement here would be making collection incremental.
Overall this is a good model: use GC for small per-green-thread heaps, use reference counting for shared immutable structures that cannot form cycles, and copy everything else.
It's not incremental per process, but I'm not sure it would even matter that much in practice.
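For concreteness, here's roughly what that model looks like from the program's side (a minimal Elixir sketch; the deep-copy into the receiver's heap happens implicitly inside `send`):

```elixir
# Each process has its own small heap, collected independently of the others.
# A term sent in a message is copied into the receiving process's heap, so
# neither process's GC pause can affect the other.
parent = self()

child =
  spawn(fn ->
    receive do
      {:data, list} ->
        # `list` is now a private copy on this process's heap
        send(parent, {:sum, Enum.sum(list)})
    end
  end)

send(child, {:data, Enum.to_list(1..1_000)})

receive do
  {:sum, total} -> IO.puts("sum: #{total}")
end
```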
The modern GraalVM does have isolates, but it's a VM-specific feature, not a standard Java feature.
The trade-off is more copying when you pass values between processes. Honestly, it would be really cool if you could mark certain values that you know you're going to pass around and put them in a shared heap like the global binary heap.
As binary strings work their way through the pipelines via messages, they leave binaries on the binary heap that don't go away because the ref count stays above 1. There are a number of GC parameters one can tune at a per-process level that might cause a long-lived process to collect more aggressively. But my long-lived processes have a natural "ratchet" point where it was just easy to throw a collect in. This solved all of my slow-growth memory problems.
I've read elsewhere that Erlang's GC often benefits from the fact that most Erlang processes are short-lived.
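A sketch of both approaches for a long-lived process (the `fullsweep_after` spawn option and `:erlang.garbage_collect/0` are from the ERTS docs; the `Worker` module and its batch shape are made up for illustration):

```elixir
defmodule Worker do
  # Long-lived process: after each batch -- the natural "ratchet" point --
  # collect explicitly so refc binaries picked up from messages are released
  # promptly instead of pinning the shared binary heap.
  def loop do
    receive do
      {:batch, bins} ->
        process(bins)
        :erlang.garbage_collect()
        loop()
    end
  end

  defp process(_bins), do: :ok
end

# Alternatively (or additionally), tune at spawn time: force a full sweep
# every 10 minor collections instead of the default 65535.
pid = :erlang.spawn_opt(&Worker.loop/0, fullsweep_after: 10)
```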
- tortoise311 - I’ve toyed with rewriting my own. We do very simple MQTT: QoS 0, no wills, etc. The existing implementation creates many long-lived procs per connection, and we keep our connections live; they’re mostly subscribers
- bandit/plug - originally I was doing Phoenix because That’s The Thing, but it was such “A Way”, I was constantly having to learn how to accommodate things I just ended up turning off or suppressing. I just have straightforward (imo) API endpoints; Mat Trudel suggested I might just use Bandit with Plug. He’s done a great job with Bandit and been very proactive; just doing Plug myself helped me understand the whole HTTP handling pipeline at a more fundamental level
- Cachex - we use a credentials OAuth workflow. We were able to implement that in a single plug and use Cachex. I may throw that out eventually; I’ve heard people indicate Cachex has hung on them, and it’s easy enough to do your own here
- Mint - I tried Finch and a couple other “help you” request frameworks. I had all kinds of problems tuning them as I moved up to many thousands of steady-stream (every 10s+) hooks being dispatched. Eventually, I saw a comment in one of them that said something like “at any scale, you end up doing your own layer on top of Mint to best fit the nuances of your application”, so I did just that, using the source from Peppermint and Finch to guide/inspire me
- open_api_spex - to swaggerify our endpoints; this requires a lot of boilerplate code and forced me to learn to write some of my own macros just to reduce it a little. I understand you get some of that for free when using it with Phoenix; the authors have been really helpful
- recon - because
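For anyone curious what “just Bandit with Plug” looks like without Phoenix, a minimal sketch (module name, routes, and port are hypothetical):

```elixir
defmodule MyApp.Router do
  use Plug.Router

  plug :match
  plug :dispatch

  get "/health" do
    send_resp(conn, 200, "ok")
  end

  match _ do
    send_resp(conn, 404, "not found")
  end
end

# In your application's supervision tree:
children = [
  {Bandit, plug: MyApp.Router, port: 4000}
]
```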
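And the bare Mint usage that any “layer on top of Mint” starts from (host and path are placeholders). Mint is process-less: you own the socket and feed it the raw messages yourself, which is exactly what makes it tunable at scale:

```elixir
{:ok, conn} = Mint.HTTP.connect(:https, "example.com", 443)
{:ok, conn, _request_ref} = Mint.HTTP.request(conn, "GET", "/", [], nil)

receive do
  message ->
    case Mint.HTTP.stream(conn, message) do
      {:ok, conn, responses} ->
        # responses is a list of {:status, ref, code}, {:headers, ref, ...},
        # {:data, ref, ...}, {:done, ref} tuples
        IO.inspect(responses)
        Mint.HTTP.close(conn)

      :unknown ->
        # message wasn't for this connection; pass it along
        :ok
    end
end
```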
There’s probably some stuff I should use that I’m not. But I’ve got a limited amount of time to improve this and keep native apps on two platforms running.
If I blogged, it’d be a good write up (how to do a kind of web thing — but without pages — without Phoenix!) maybe.
If one can avoid GC altogether via precise (de)allocation, à la Rust's non-reference-counted values, that's cool, but it often requires unnatural contortions. RC is still necessary in certain cases.