First, async/await does NOT mean "threading" or "multiprocessing" or "concurrency". It simply means "using a state machine to alternate between tasks, which may or may not be concurrent." Right?
Further, in JavaScript, futures and async are used heavily because we so frequently need to wait for IO events (i.e., network events) to complete, and we don't want to block execution of the entire page just to wait for an IO operation to finish. So the JS engine allows you to fire off these network events, do something else in the meantime, and then execute the "done" behavior when the IO is complete (and even in this case, we might not be concurrent).
That makes sense to me.
But say I have written something in Rust that makes use of async/await. And say there is absolutely no IO or multithreading. Say I have some awaitable function called "compute_pi_digits()" that can take arbitrarily long to complete but does not do IO, it's purely computational. Is there any benefit to making this function awaitable? Unless I actually spawn it in a different thread, the awaitable version of this function will behave identically to if it were NOT awaitable, correct?
And one last idea: the async/await pattern is becoming so popular across vastly different languages because it allows us to abstract over concepts like concurrency, futures, promises, etc. It's a bit of a "one size fits all" regardless of whether you're spinning up a thread, polling for a network event, setting up a callback for a future, etc?
In both JS and Rust, you don't gain anything just by declaring your thing to be async or awaitable. Your function needs to be built around some kind of "primitive" that explicitly supports the "do something else in the meantime" mechanism. Using "await" on that thing lets your function piggyback on its support, but all your explicit, normal code blocks synchronously as usual.
In Rust, I think it's a fairly established pattern to turn blocking code, where the blocking part is not some IO action with explicit support for the futures mechanism, into an asynchronous, awaitable function by punting the work to a threadpool. That makes sense for CPU-bound work, as well as for IO done by libraries that don't support futures, or things like disk IO where the OS might not actually have decent support for doing it in a non-blocking fashion.
I'm not sure if it's the canonical mechanism, but this crate seems to implement what I'm thinking of: https://docs.rs/futures-cpupool/0.1.8/futures_cpupool/
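A std-only sketch of that "punt to a thread" idea, with no futures crate involved. `compute_pi_digits` is a made-up stand-in for the expensive work, and the `Receiver` plays the role of the future; a crate like futures-cpupool (or tokio's `spawn_blocking` nowadays) wraps essentially this into something you can `.await`:

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical stand-in for the purely computational, arbitrarily slow work.
fn compute_pi_digits(n: usize) -> String {
    "3.14159265358979323846".chars().take(n).collect()
}

// Punt the blocking work to another thread; the Receiver is the
// "let me know once you're done" handle.
fn spawn_pi(n: usize) -> mpsc::Receiver<String> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let _ = tx.send(compute_pi_digits(n));
    });
    rx
}

fn main() {
    let pending = spawn_pi(10);
    // ... the current thread is free to do other work here ...
    let digits = pending.recv().unwrap(); // the "await": block until done
    println!("{digits}"); // prints 3.14159265
}
```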
I'm not familiar with Rust but in JS you do gain something. Just the fact that the function is async means that it now explicitly returns a promise, which means that anything awaiting that promise will be in a new execution context and will definitely not run synchronously.
If you introduce suspension points in that (e.g. every 100 computed digits), then you can co-schedule other tasks (e.g. a similar `compute_phi_digits`) or handle graceful cancellation (e.g. if a deadline is exceeded, or its parent task aborted in the meanwhile).
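A sketch of that co-scheduling using only std: a hand-rolled `YieldNow` suspension point and a toy round-robin executor interleaving two hypothetical digit computations (task names and chunk counts are made up; a real executor would sleep until woken instead of re-polling every turn):

```rust
use std::cell::RefCell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A future that is Pending once, then Ready: a voluntary suspension point.
struct YieldNow(bool);

impl Future for YieldNow {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 { Poll::Ready(()) } else { self.0 = true; Poll::Pending }
    }
}

// No-op waker: acceptable only because this toy executor re-polls every
// task each turn rather than waiting to be woken.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn run() -> Vec<String> {
    let log = Rc::new(RefCell::new(Vec::new()));
    let task = |name: &'static str, log: Rc<RefCell<Vec<String>>>| async move {
        for chunk in 0..3 {
            // ... compute the next 100 digits here ...
            log.borrow_mut().push(format!("{name} chunk {chunk}"));
            YieldNow(false).await; // suspension point: let other tasks run
        }
    };
    let mut tasks: Vec<Pin<Box<dyn Future<Output = ()>>>> = vec![
        Box::pin(task("pi", log.clone())),
        Box::pin(task("phi", log.clone())),
    ];
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    // Round-robin: poll every unfinished task once per turn.
    while !tasks.is_empty() {
        tasks.retain_mut(|t| t.as_mut().poll(&mut cx).is_pending());
    }
    let result = log.borrow().clone();
    result
}

fn main() {
    for line in run() {
        println!("{line}"); // the pi and phi chunks interleave
    }
}
```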
Well, then we can simply optimize away your entire program, since it does nothing and running it has no side effects.
Even if your program is entirely CPU bound, there are uses for writing in an async/await style. As an example, parsers can be quite natural to write in that style.
Or you can use it to have multiple computations running at the same time, and give updates on their progress. It is voluntary time slicing, which is significantly less overhead than the OS doing time slicing for you.
Software threads, from the OS to your application, run on one of the CPU's cores for a certain amount of scheduled time, then get switched out for some other thread. If threads are waiting on IO, they're not making much use of the time they get and are wasting CPU capacity.
An older approach is to just make more threads and switch them out faster, but this is very inefficient. Async/await is a way to keep a thread from getting stalled by a single function: it can switch to a different function in the same process that does have work available. It's basically another level of granularity in slicing CPU time, within a thread.
The keywords do not force anything, they are just signals to the underlying software that it may pause and come back later if necessary, along with setting up state to track results. Some methods may still run all on the same thread if there's nothing else to do, or if the async result is already available and there is no waiting needed.
Async/await is usually built on top of yield, generators, promises, or other constructs that are basically state machines or iterators.
The async call itself doesn't return intermediate results, though, so you'd have to handle that a different way. And if you want to cancel the task, you need another way to handle that too.
Something like computing the digits of pi would be better represented by a stream or iterator since the caller should decide when it's done.
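For example, the iterator version might look like this. The digit source here is a hardcoded prefix purely for illustration; a real implementation would use a spigot algorithm instead:

```rust
// Digits of pi as a plain Iterator: the *caller* decides when to stop.
struct PiDigits {
    idx: usize,
}

// Placeholder digit source, not a real computation.
const PI_PREFIX: &[u8] = b"314159265358979323846";

impl Iterator for PiDigits {
    type Item = u8;
    fn next(&mut self) -> Option<u8> {
        let digit = PI_PREFIX.get(self.idx).map(|b| b - b'0');
        self.idx += 1;
        digit
    }
}

fn main() {
    // The consumer, not the producer, decides "I'm done": take five digits.
    let first_five: Vec<u8> = PiDigits { idx: 0 }.take(5).collect();
    println!("{first_five:?}"); // [3, 1, 4, 1, 5]
}
```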
See http://blog.ploeh.dk/2016/04/11/async-as-surrogate-io/ for further discussion. Regular programmers don't consciously use Task&lt;T&gt; as an IO-monadic marker, but they are surprised when a usage differs from that model.
But ultimately, to make use of async, you need async primitives - something that lets you say "do this in the background somehow, and let me know once you're done". Any async/await call should ultimately end at one of those primitives, and it's at that point that another call might get interleaved. If you don't actually do I/O or anything else that can do a non-blocking wait, you're not getting anything useful from async.
C# async/await is also very much built on resumable state machines.
However, the execution aspect is a bit different: in C#, once a leaf future/Task gets resolved, it will in many cases synchronously call back into the state machine which awaited the Task (by storing a continuation inside it). A whole promise chain might resolve synchronously, directly on the stack of the caller. I say "in many cases" because the whole thing depends on some very subtle properties, like whether a SynchronizationContext or TaskScheduler was configured.
In Rust's task system, a leaf future will never call back into the parent. It will only notify the associated task executor that it can retry running/polling the future to completion. When the task gets executed again, it will run from the normal scheduler thread in a top-down fashion.
This makes Rust's system a little less performant for some use cases, but also a lot less error-prone (no synchronization issues arising from not knowing where some code runs). It is also one of the key ingredients for avoiding allocations for individual futures.
JavaScript's system is closer to the C# mechanism, but avoids the error-prone part: when a leaf future is finished, the continuation of the parent future gets called. However, this is never done synchronously, but always on a fresh iteration of the event loop (to avoid side effects). That works fine for JavaScript because the event loop is guaranteed (it's not in C# async code), and promises are on the heap anyway.
They will use language-level generators if compiling to ES 2015, or user-land generators if compiling below that.
Is the website author here? What are you running server side that’s giving such great performance?
Pinning is required here because your AsyncRead read_to_end returns a future bound by some reference lifetime?
Yep. The generator created by quote_encrypt_unquote is creating internal self-references, from the future created by read_to_end into the AsyncRead it's storing in its environment. While this is happening, the AsyncRead must not move, and therefore the generator must not move, which is what pinning represents.
But the rest of the event-loop machinery is quite different. JS's async is still fundamentally callback-based. Rust's futures are polled. In JS there's a single global event loop and promises run automagically. In Rust you create futures managed by their executors, each handling its own kind of tasks (CPU pools, network polling).
Would you mind elaborating on the polled aspect of Rust futures, or linking me to some documentation? Do you mean that there is a loop polling the result of a future? How does that work with things like select?
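Roughly, yes. Here's a minimal std-only illustration with a hand-rolled leaf future and a toy `block_on`. Two caveats: the countdown is a made-up stand-in for "the socket isn't readable yet", and real executors sleep until the waker fires rather than spinning:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A leaf future that needs three polls before it's ready.
struct Countdown {
    remaining: u32,
}

impl Future for Countdown {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<&'static str> {
        if self.remaining == 0 {
            Poll::Ready("done")
        } else {
            self.remaining -= 1;
            // A real leaf future would hand _cx.waker() to the reactor here,
            // so the executor knows *when* it's worth polling again.
            Poll::Pending
        }
    }
}

// No-op waker: fine only because this toy executor busy-polls.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Toy executor: poll in a loop until Ready. select is similar in spirit:
// poll several futures and take whichever returns Ready first.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(value) = fut.as_mut().poll(&mut cx) {
            return value;
        }
    }
}

fn main() {
    println!("{}", block_on(Countdown { remaining: 3 })); // done
}
```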
Without this, you sometimes had to write a wrapper function that does some synchronous setup and returns a Future, which was a bit annoying for stylistic reasons.
There's an interesting but somewhat old discussion here:
https://www.reddit.com/r/rust/comments/8aaywk/async_await_in...
I wonder if anything changed since then? I'm not a Rust programmer so I didn't really understand the article.
"... performing a CPS-like transform where an async function is split into a series of continuations that are chained together via a Future::then method"
they are referring to c# and js implementation of promises/futures here
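For contrast, here's a hand-written sketch of the state-machine form Rust settled on instead of `then`-chained continuations. The `step_a`/`step_b` results are inlined as constants for illustration; the point is that the enum variant records where to resume after each suspension:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Rough, hand-written approximation of the state machine the compiler
// generates for something like:
//     async fn pipeline() -> u32 { let a = step_a().await; step_b(a).await }
enum Pipeline {
    Start,
    AwaitingB { a: u32 },
    Done,
}

impl Future for Pipeline {
    type Output = u32;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
        let this = self.get_mut(); // fine: this enum is Unpin
        match *this {
            Pipeline::Start => {
                let a = 20; // pretend step_a completed and yielded 20
                *this = Pipeline::AwaitingB { a };
                Poll::Pending // suspend; next poll resumes from AwaitingB
            }
            Pipeline::AwaitingB { a } => {
                *this = Pipeline::Done;
                Poll::Ready(a + 22) // pretend step_b finished
            }
            Pipeline::Done => panic!("polled after completion"),
        }
    }
}

fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker { RawWaker::new(std::ptr::null(), &VTABLE) }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn run_pipeline() -> u32 {
    let mut fut = Box::pin(Pipeline::Start);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    println!("{}", run_pipeline()); // 42
}
```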
State machines are also what Clojure(Script) core.async uses.
(Easy choice as there are no continuations available)
Go sort of does the same thing, but insists on running fibers in separate threads at its convenience; which means giving up the lovely simplicity of cooperative multitasking for the same old multi-threaded circus.
I'm unfortunately not aware of any languages more recent than Smalltalk that get this right. My own baby, Snigl [0], is just getting to the point where it's doable.