I grew up with cooperative multitasking on Mac OS and used Apple's OpenTransport heavily in the mid-90s before Mac OS X provided sockets. Then I spent several years working on various nonblocking networking approaches like coroutines for games before the web figured out async. I went about as far down the nonblocking IO rabbit hole as anyone would dare.
But there's no there there. After I learned Unix sockets (everything is a stream, even files) it took me to a different level of abstraction where now I literally don't even think about async. I put it in the same mental bin as mutexes, locking IO, busy waiting, polling, even mutability. That's because no matter how it's structured, async code can never get away from the fact that it's a monad. The thing it's returning changes value at some point in the future, which can quickly lead to nondeterministic behavior without constant vigilance. Now maybe my terminology here is not quite right, but this concept is critical to grasp, or else determinism will be difficult to achieve.
I think a far better programming pattern is the Actor model, which is basically the Unix model and piping immutable data around. This is more similar to how Go and Erlang work, although I'm disappointed in pretty much all languages for not enforcing process separation strongly enough.
Until someone really understands everything I just said, I would be very wary of using async and would only use it for porting purposes, never for new development. I feel rather strongly that async is something that we'll be dealing with and cleaning up after for the next couple of decades, at least.
I agree, it does seem like a step backwards in general. However, for Rust it makes sense. There is no runtime, so there is nothing to preempt the green threads/lightweight processes, etc. But yeah, with higher-level languages like Python, I was disappointed to see async emphasized in 3.x over green threads, which were already used by a number of projects.
I also believe the way all of this is presented is not the right abstraction. Actors + CSP is probably the best way. Plus, even if concurrency != parallelism, I think the parallelism idioms make more sense (pin to the "thread", do fork-joins, use ring buffers for channels, etc.).
However, I suppose the whole issue is that async as-is is easier to support mechanically in the compiler, and it lets you squeeze out the performance and resource usage that matter for Rust.
But maybe keep it hidden and surface another kind of API?
That's the problem with monadic stuff in general. One solution to that might be to keep the async part on the "edge" of your programs (a bit like the functional core, imperative shell pattern or the hexagonal architecture), write all your logic without async and use async only on the edge.
"The Unix library provided with OCaml uses blocking IO operations, and is not well suited to concurrent programs such as network services or interactive applications. For many years, the solution to this has been libraries such as Lwt and Async, which provide a monadic interface. These libraries allow writing code as if there were multiple threads of execution, each with their own stack, but the stacks are simulated using the heap.
The multicore version of OCaml adds support for "effects", removing the need for monadic code here. Using effects brings several advantages:
1. It's faster, because no heap allocations are needed to simulate a stack.
2. Concurrent code can be written in the same style as plain non-concurrent code.
3. Because a real stack is used, backtraces from exceptions work as expected.
4. Other features of the language (such as try ... with ...) can be used in concurrent code.
Additionally, modern operating systems provide high-performance alternatives to the old Unix select call. For example, Linux's io-uring system has applications write the operations they want to perform to a ring buffer, which Linux handles asynchronously."
[0] https://news.ycombinator.com/item?id=28838099
[1] https://www.youtube.com/watch?v=hrBq8R_kxI0
[2] https://overreacted.io/algebraic-effects-for-the-rest-of-us/
But I haven't seen any public discussions on the future of Rust governance, how to make the core team accountable, or other consequences since.
I'm somewhat invested in Rust, and it's a bit worrying to see this from two places.
By the way, using Go as an example is a joke: since the early Go bootcamp I attended in 2014, the best practice has been to use a third-party HTTP router (these days: gorilla? httprouter? chi? etc.) instead of the one provided in the standard library. Instead of being _told_ what to use, let's get back to being interested enough that we read the docs, take in the reviews and benchmarks, and decide for ourselves.
Your Go example is not quite comparable:
Go: (Lang, libs)
- You can mix and match any libraries.
Rust: (Lang, runtime, libs)
- Now you can only choose the libraries for your runtime. This dilutes the time investment of crate developers and the utility of Cargo crates, as you want a general async thing but it is tied to a specific runtime.
I think the Rust team should have included a solid zero-config runtime but allowed it to be replaced.
This doesn't have to be a blessed runtime in std, but could be just a set of common interfaces (basics like AsyncRead, sleep, and spawn), so that async crates don't have to directly depend on a specific runtime.
You're mistaken. Using a third party router doesn't lock you into a particular subset of the ecosystem. E.g., I tend to use the Gorilla router by default, but I can use it with any middleware that implements the standard http.Handler interface.
Rust made the deliberate decision to avoid the heavier Go goroutines runtime model after early alpha/beta experiments showed it conflicted with Rust's low-level design. I found 3 links to some history of that rationale in a previous comment:
https://news.ycombinator.com/item?id=28660089
And some more links:
https://stackoverflow.com/questions/29428318/why-did-rust-re...
https://github.com/rust-lang/rfcs/blob/master/text/0230-remo...
And lots of debate in this previous thread: https://news.ycombinator.com/item?id=10225903
But this practically requires some kind of garbage collection and a fat runtime.
I think it was a good decision on the Rust team to abandon this and go for a low level systems programming language. Otherwise it would've been just another Go-like language that isn't really usable in low level systems programming.
Implementing portable async language features without a fat runtime or garbage collection is novel work so it's no wonder that it's taking its own sweet time to reach maturity.
On the semantic side, extending it to an N:M threading model like Erlang's or Go's would work great. But that model only seems to work well if you basically make the entire language async, which conflicts with too many of Rust's goals. So we are left with the somewhat awkward state of async as a second-class citizen.
How does Go nest async calls?
func f() { }
func g() { }
func h() { go g(); go f() }
What happens on
f()
Are the g() and f() calls inside h() blocking? Or are they async, with the block happening at the point of return? That would be the main difference from languages with an async keyword, where you need to be explicit about blocking.
A better comparison would be between the Rust and C++ paths to async - C++ also spent years designing their async system, and the end result is divisive at best.
Go’s runtime model just makes stuff like this vastly simpler. Rust can’t impose the same kind of runtime model that go has.
Go and Java (with Loom) have these lovely facilities, but it is hard to interface with them if your language lacks these features. I find it odd that C#, JavaScript, and Python don't provide the smoother async experience Go has, despite having runtimes/VMs.
Unfortunately, according to one of the lead developers, the system couldn't keep up with Mozilla's throughput and reliability requirements due to limitations of Go's built-in features.[0] They announced they would rewrite a new solution in C ("Hindsight"), and they basically left an entire community of users high and dry, unable to salvage the Go-based project since it relied so heavily on those built-in features.
[0] https://heka.mozilla.narkive.com/9heQ11hz/state-and-future-o...
It’s not that Rust has struggled, it was never Rust’s priority to have a runtime or high level async code. It had very different goals to Go.
It’s like asking “why has C struggled to implement Promises like JavaScript”? The languages serve different purposes.
You’re right, it wasn’t their initial focal point, but later on Rust wanted to offer the chance of having a Go-like runtime without destabilising the low-level performance at the core level, i.e., only those that use it pay for it, and those that don’t use it aren’t affected.
Offering “zero cost” futures etc is very difficult to do.
See https://aturon.github.io/blog/2016/08/11/futures/ for more info, old but still relevant (including the chart)
For example, the Go runtime imposes unavoidable memory overhead, because each goroutine must have its own allocated stack (Rust futures, on the other hand, are stackless). Rust runs on low-memory platforms where Go isn't really suitable.
Architecture of an efficient async runtime is going to be different for 128-core server vs single-threaded chip with barely any RAM. In Rust you can write your own runtime to your needs, rather than fight overhead of a big runtime on a small device, or struggle to scale a dumb runtime to complex workloads.
For instance, C is a great abstraction. You take assembly language, abstract away manual management of registers with variables and pointers, add structured types to describe memory layout, standardize flow of control operations, and add functions to enable code reusability, and you have something which is very easy to work with and also to understand. It's not 100% on par with assembly in terms of performance, but it's pretty darned close, and with a little bit of practice it's very easy to look at a block of C code and basically understand what equivalent assembly it compiles to. It's a great abstraction, and it's no wonder that a vast majority of the languages which have come after it have borrowed most of its major features.
I would argue we haven't really had a "great abstraction" to the same level since then*. There have been efforts to abstract away memory management the way register management has been abstracted away, and many of them have been successful for a lot of use-cases, but not to the point that everyone can forget about memory management the way the vast majority of us can forget about register management. Garbage collectors can be too slow or too wasteful for a lot of use-cases, and you need essentially another program you didn't write to pull it off. In a GC'd language it's not so trivial to look at a block of high-level code and predict what your CPU will do. There are other approaches: like the structured approaches of Rust and Swift which are quite interesting, but they're far from proven at this point.
Similarly I think we're not quite there yet with concurrent programming. As far as the transparency topic, a lot of async implementations are more in the direction of garbage collectors, where the compiler rips apart your code and builds a state machine in its place. It's not hard to believe that the result will be difficult to work with and reason about in some cases.
And maybe the problem is that most approaches to async are trying to cram concurrent execution into that C-like abstraction, which is so elegant precisely because it models single-threaded execution. Maybe concurrent programming needs to be re-thought from first principles, with different primitives involved.
*Aside: if there is another "great abstraction" on the horizon, I believe it to be ADTs (algebraic data types). That is a feature of programming which feels like a clear step forward with no clear downsides. It's a shame that they haven't been included in Zig.