If your memory usage doesn't plateau, you have a memory leak, caused by a bug in your code or in a dependency.
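A quick way to check for a plateau is to sample the process RSS over time and eyeball the trend. A minimal sketch, assuming Linux (it reads `/proc`; the `rss_kb` helper is just an illustrative name):

```ruby
# Sample this process's resident set size (Linux-only: parses /proc/PID/status).
def rss_kb
  File.read("/proc/#{Process.pid}/status")[/VmRSS:\s+(\d+)/, 1].to_i
end

# In a real app you'd log this from a background thread every minute or so;
# a leak shows up as a line that never flattens out.
samples = 3.times.map { rss_kb }
puts samples.inspect
```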
But 500 MB to 1 GB of memory for a production Rails app isn't unusual. Heroku knows this, which explains their bonkers pricing for 2 GB of memory. They know where to stick the knife.
That is not correct. Ruby does unmap pages when it has too many free ones, and it obviously calls `free` on memory it allocated once it no longer uses it.
What happens sometimes, though, is that fragmentation leaves you with many free slots but no entirely free pages. That is one of the reasons GC compaction was implemented, but it's not enabled by default.
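To make that concrete, here's a sketch (assuming Ruby 2.7+, where `GC.compact` exists): churn through a lot of short-lived objects, then compact so that surviving objects get moved together and whole pages can become free:

```ruby
# Fragmentation: free slots scattered across many pages, so no page is
# entirely empty and nothing can be returned to the OS.
before_pages = GC.stat(:heap_allocated_pages)

# Churn: allocate and immediately drop a pile of short-lived strings.
100_000.times { |i| "junk #{i}" }

GC.start
# GC.compact (Ruby 2.7+) moves live objects together, turning scattered
# free slots into whole free pages. Not enabled automatically by default.
GC.compact if GC.respond_to?(:compact)

after_pages = GC.stat(:heap_allocated_pages)
puts "allocated pages: before=#{before_pages} after=#{after_pages}"
```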
But in most cases I've seen, the memory bloat of Ruby applications was caused by glibc malloc, and the solution was either to set MALLOC_ARENA_MAX or to switch to jemalloc.
On the last fairly large Rails app I tried jemalloc on, there was no change in memory usage. I believe that advice is a bit outdated. Also note that using jemalloc doesn't cause memory to be freed to the system. It reduces fragmentation, at the cost of CPU cycles. There's no free lunch.
Yes, because extra empty pages are released at the end of a major GC, which is occasional, and most web applications will cyclically use enough memory that they stabilize / plateau at some point.
> I believe that advice is a bit outdated.
It absolutely isn't; your anecdote doesn't mean much compared to the countless reports you can find out there.
> Also note using jemalloc doesn't cause memory to be freed to the system.
Yes it does: it has a decay mechanism, as most allocators do (see `opt.dirty_decay_ms` / `opt.muzzy_decay_ms`). https://jemalloc.net/jemalloc.3.html
> It reduces fragmentation
Yes, and that allows it to have more free pages that it can release.
> at the cost of cpu cycles
Compared to glibc, not so much.
That’s why good modern allocators like mimalloc and tcmalloc return memory when they notice it’s going unused, so that other services running on the machine can use those resources. And this is in C++ land, where things are even more perf-sensitive.
But if you really do need to cheap out, you can generally configure your app server to kill idle worker processes, or bounce them on a schedule to return memory to the system, and hope.
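For the "bounce on a schedule" approach, a sketch of a puma config using the puma_worker_killer gem (my assumption, not something the thread specifies; the 12-hour interval is arbitrary):

```ruby
# config/puma.rb -- sketch; assumes puma and the puma_worker_killer gem
# are in the bundle.
workers 2
threads 5, 5

before_fork do
  require "puma_worker_killer"
  # Restart each worker roughly every 12 hours, handing any bloated or
  # fragmented memory back to the OS. Rolling, so workers don't all
  # bounce at once and you keep serving requests.
  PumaWorkerKiller.enable_rolling_restart(12 * 3600)
end
```

The "and hope" part is real: a worker killed mid-request drops that request unless the restart is coordinated, which is exactly the graceful-handoff problem discussed below.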
Killing “idle” processes is also extremely expensive: you have to restart the process and reload all its state, and doing a graceful handoff is tricky.
It’s good to have graceful handoff for zero-downtime upgrades, but I still say having your allocator return RAM is the cheapest and easiest option, and something good modern allocators do for you automatically.
Extremely bold claim for a framework the size of Ruby on Rails. I would trot out my own evidence, but the receipts are lost to time.
Also: why isn't the allocation behavior tweakable at runtime? It seems straightforward, with no obvious downsides. It's not hard to think of a scenario where a non-monotonically-increasing heap size is desirable.
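For what it's worth, some of this is already exposed at runtime. A sketch, assuming Ruby 3.0+ (where automatic compaction was added), with feature checks so it degrades on older or unsupported builds:

```ruby
# Knobs Ruby does expose at runtime (guarded, since support varies by
# version and platform):

# Ruby 3.0+: run compaction automatically as part of major GC.
GC.auto_compact = true if GC.respond_to?(:auto_compact=)

# Ruby 2.7+: one-off compaction; returns a hash of compaction statistics.
stats = GC.respond_to?(:compact) ? GC.compact : {}
puts stats.class
```

Heap growth itself is tuned via `RUBY_GC_*` environment variables (e.g. `RUBY_GC_HEAP_GROWTH_FACTOR`), but those are read at boot, not adjustable mid-flight.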
Memory management is handled by the language.