If Go had this, it might never need to run its GC at all: it could just fire up a new region for every request. The request would likely finish and blast away its memory before a collection was needed, or the runtime could choose to collect only while that particular goroutine/region is blocked.
Extra benefit: if there's an error in one region, we can blast it away and the rest of the program continues!
[0] https://tutorial.ponylang.io/types/actors.html#concurrent
[1] https://cone.jondgoodwin.com/fast.html
[2] https://verdagon.dev/blog/seamless-fearless-structured-concu...
edit: my reading comprehension skills are lacking; please see the comment below for why I'm wrong
Turns out no one on the team actually looked into the issues in the Go repo to see if it was being addressed. Looks like they just wanted to write Rust, which is fine (Rust is cool), but let's not deceive ourselves.
(Anyone know if they're still using Rust?)
It has been an incredible success. I plan to blog more about it in the coming months. Our usage of Rust is continuing to grow, and if you check out our jobs page, you might notice all backend / infra jobs list Rust in them now :)
I think probably 40% of requests are handled directly by Rust services now, with the rest involving one or more Rust services called from our Python API layer.
Makes it pretty hard to find stuff!
Yes, Rust provides a more predictable, faster memory-management model than Go, at the expense of unpredictable, expensive memory leaks triggering application termination.
Curious how much time and effort was dedicated to improving the GC, which is a useful endeavor in its own right.
That said, I've worked on several embedded systems, and the never-allocate-at-runtime rule most of them had was critical to maintaining real-time-like performance. One was written in C++, which meant we basically couldn't use most of the STL or Boost; we had to roll our own implementations of many of the data structures used on the performance-critical threads. I couldn't imagine using a language with GC baked in for such a system. But the results spoke for themselves: microsecond-level latencies and performance that scaled well with increased CPU core counts.
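The never-allocate rule usually means doing all allocation up front and using fixed-capacity containers on the hot path. A minimal sketch of the idea in Go (`FixedQueue` is a made-up type for the example, not a real library):

```go
package main

import "fmt"

// FixedQueue is a fixed-capacity ring buffer: all storage is allocated
// once, up front, so the hot path never touches the allocator (and, in
// a GC'd language, never creates garbage).
type FixedQueue struct {
	buf        []int
	head, size int
}

func NewFixedQueue(capacity int) *FixedQueue {
	return &FixedQueue{buf: make([]int, capacity)} // the only allocation
}

// Push returns false instead of growing when the queue is full.
func (q *FixedQueue) Push(v int) bool {
	if q.size == len(q.buf) {
		return false
	}
	q.buf[(q.head+q.size)%len(q.buf)] = v
	q.size++
	return true
}

// Pop returns the oldest element, or false when the queue is empty.
func (q *FixedQueue) Pop() (int, bool) {
	if q.size == 0 {
		return 0, false
	}
	v := q.buf[q.head]
	q.head = (q.head + 1) % len(q.buf)
	q.size--
	return v, true
}

func main() {
	q := NewFixedQueue(4)
	for i := 1; i <= 5; i++ {
		fmt.Println(q.Push(i)) // the fifth push fails rather than allocating
	}
	v, _ := q.Pop()
	fmt.Println(v) // prints 1
}
```

Rejecting a push when full, instead of growing, is the whole point: worst-case latency stays bounded because nothing on the hot path can trigger the allocator.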
With Discord, I imagine a big reason Rust was considered as an alternative to Go is that they already have a substantial Elixir codebase. Rust and Elixir have a very easy time communicating with one another via Erlang NIFs (native implemented functions): you can embed languages like C/C++/Rust in Elixir without much overhead. While I've never personally tried to do such a thing with Go, I can't imagine it's a smooth experience; you'd probably need to use Ports or C nodes for Go for this reason alone.
I love Go myself, but one of the biggest turn-offs of the language is its FFI support for C and other C-compatible languages. cgo calls are relatively expensive compared to the FFI of many comparable languages, and should be avoided where possible.
We use Rust over Go, not only because of the garbage collection issues, but because it's truly a better language in almost every way (once you learn it!)
I will say Go is much easier to pick up, but in exchange you pay in the long term: a language that actively works against you once you start on more advanced programs, and a mountain of code accumulated over the years that you have to maintain.
We work on high concurrency systems here, and I very much enjoy not ever having to think about "is this thing thread safe" because the compiler is checking that for you. I love being able to use the type system to offer my co-workers powerful, but difficult to misuse libraries. I like having sensible abstractions around concurrent execution.
For example: if you create a channel in Go and then never read from it, or give up on it (say, because you lost a race with a timeout), any goroutine that writes to that channel will block forever and leak. In Rust, if you write to a channel whose receiver no longer exists, the send returns an error, which you can handle or simply ignore depending on your use case. In Go you can be careful and allocate your channels with a capacity of 1, but you can also just forget, and start a steady leak of goroutines for the lifetime of your program, one the garbage collector won't save you from.
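A tiny Go sketch of that failure mode (`countLeaked` is a made-up helper for the demo; it just diffs `runtime.NumGoroutine` before and after):

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

// countLeaked spawns n goroutines that each send one value on a channel
// the caller immediately abandons (as if it lost a race with a timeout).
// With an unbuffered channel, every send blocks forever and the goroutine
// leaks; with capacity 1, the send completes and the goroutine exits.
func countLeaked(buffered bool, n int) int {
	capacity := 0
	if buffered {
		capacity = 1
	}
	before := runtime.NumGoroutine()
	for i := 0; i < n; i++ {
		ch := make(chan int, capacity)
		go func() { ch <- 42 }() // the writer
		// the reader gives up here and never receives from ch
	}
	time.Sleep(200 * time.Millisecond) // let finished goroutines exit
	return runtime.NumGoroutine() - before
}

func main() {
	fmt.Println("leaked (unbuffered):", countLeaked(false, 50))
	fmt.Println("leaked (buffered):", countLeaked(true, 50))
}
```

The unbuffered senders are permanently blocked, and since each still references its channel, neither the goroutine nor the channel is ever collected.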
Want to execute many futures with bounded concurrency in Rust and collect the results into a Vec, but give up if any future fails or a timeout elapses, and also make sure all allocated resources are properly dropped and closed if any error happens? Just combine futures::stream::StreamExt::{buffer_unordered, collect} with tokio::time::timeout, and in a few lines of code you've done it.
Want to do the same in Go? Spawn a pool of goroutines, distribute two channels (one for sending them work, one for collecting results), throw in a WaitGroup, pass a context along to every goroutine, make sure you don't forget any defers, make sure any shared resource is thread safe or that you're locking/unlocking the right mutex, and size your result channel appropriately, or you'll leak goroutines (and whatever they hold) if the main goroutine that spawned all that work times out waiting for results. Is there a library that does all this for you in Go? I googled "golang run many goroutines and collect their results" and looked at the first page of results, and it's basically the above...
It's no surprise, then, that we've chosen to use Rust pretty seriously. When you're looking to build reliable systems with serious speed and massive concurrency, you pick the best tool for the job. For us that's Rust, not Go. And for our real-time distributed systems we pick Elixir, because BEAM/OTP is just so dang good.
Rust’s borrow checker does defend against that in safe Rust; unless you're doing something very stupid, this is just false.
Not saying this is the case here, but it's highly likely.