Desync takes a slightly different approach to asynchronous programming: instead of being based around the idea of scheduling operations on threads, and then synchronising data across those threads, it's based on the idea of scheduling operations on data.
There are only two basic operations: 'desync' runs an operation on some data in the background, and 'sync' runs one synchronously.
All operations run in order, and 'sync' returns a value, so it's a way to retrieve data from an asynchronous operation. It's sort of like setting up some threads with data protected by a mutex and sending results between them over mpsc channels, except without having to build any of the scaffolding. ('sync' also makes it effortless to borrow data from one task for use in another.)
I’ll show myself out.
https://rust-lang.github.io/async-book/getting_started/state...
The page is missing; somewhat ironic?
Feels very much like the state of async matches the state of the guide. :P
What is the state of async? Is it close? Is it still changing with the futures 0.3-beta not finalized?
Are we six months away? A year?
What is going on with futures 0.3? Why is everyone still using 0.1?
How does that relate to these issues?
It superficially appears like the whole async story is still in a concept stage...
How does async translate calls to other async functions? Is refactoring into smaller async functions less efficient? If not, how does it deal with (possibly indirect) recursive function calls? Does it give up or select a loop breaker?
And what is the purpose of the pingpong between executor->Waker->push onto executor?
I am also still unsure what the approach to multithreading might be. Multiple executors with work stealing or one dispatch executor with worker threads or something else still?
There's nothing special going on. Remember, async on a function desugars something like
async fn function(argument: &str) -> usize {
into
fn function(argument: &str) -> impl Future<Output = usize> + '_ {
so, when you call an async function, you get a Future back. That's true even if it's inside of another async function.
> If not, how does it deal with (possibly indirect) recursive function calls?
Recursive calls to async functions will fail to compile: https://github.com/rust-lang/rust/issues/53690
That said, see that discussion; the trait object form will probably eventually work.
Heavy recursion isn't generally Rust's style anyway: there's no guaranteed TCO, so deep recursion risks overflowing the stack and crashing.
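To sketch the trait-object workaround mentioned above: boxing the recursive call means the future's type no longer has to contain itself, so its size is known. The tiny no-op-Waker polling loop is only here so the example runs without an external executor (these futures never return Pending anyway); it's an assumption of current std APIs, not part of the workaround itself:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// The recursive call is boxed, so `fib`'s future has a fixed size.
fn fib(n: u64) -> Pin<Box<dyn Future<Output = u64>>> {
    Box::pin(async move {
        match n {
            0 => 0,
            1 => 1,
            _ => fib(n - 1).await + fib(n - 2).await,
        }
    })
}

// A Waker that does nothing, built from a raw vtable.
fn noop_waker() -> Waker {
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

// Busy-poll to completion; fine here because nothing ever suspends.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(fib(10)), 55); // fib(10) = 55
}
```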
> Is refactoring into smaller async functions less efficient?
That's a complicated question. It really depends. I don't think it should be, thanks to inlining, but am not 100% sure.
> And what is the purpose of the pingpong between executor->Waker->push onto executor?
Right now, the best resource is https://boats.gitlab.io/blog/post/wakers-i/ and https://boats.gitlab.io/blog/post/wakers-ii/
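As a rough illustration of that round trip, here's a minimal, hypothetical single-future executor built only on std: the future returns Pending once and fires the Waker, which unblocks the executor so it polls again. A real executor would push the task back onto its run queue rather than parking on a condvar, but the handshake is the same:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Condvar, Mutex};
use std::task::{Context, Poll, Wake, Waker};

// The Waker side of the pingpong: wake() flips a flag and notifies
// the parked executor thread.
struct Parker {
    woken: Mutex<bool>,
    cv: Condvar,
}

impl Wake for Parker {
    fn wake(self: Arc<Self>) {
        *self.woken.lock().unwrap() = true;
        self.cv.notify_one();
    }
}

// A future that yields once: returns Pending, wakes itself, then completes.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.0 {
            Poll::Ready("done")
        } else {
            self.0 = true;
            cx.waker().wake_by_ref(); // "push me back onto the executor"
            Poll::Pending
        }
    }
}

// The executor side: poll; on Pending, sleep until the Waker fires; repeat.
fn block_on<F: Future>(fut: F) -> F::Output {
    let mut fut = Box::pin(fut);
    let parker = Arc::new(Parker { woken: Mutex::new(false), cv: Condvar::new() });
    let waker = Waker::from(parker.clone());
    let mut cx = Context::from_waker(&waker);
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => {
                let mut woken = parker.woken.lock().unwrap();
                while !*woken {
                    woken = parker.cv.wait(woken).unwrap();
                }
                *woken = false;
            }
        }
    }
}

fn main() {
    assert_eq!(block_on(YieldOnce(false)), "done");
}
```

The purpose of the pingpong, then, is that the executor never has to busy-poll: it only re-polls a future after that future's Waker says there is new progress to make.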
> I am also still unsure what the approach to multithreading might be.
You have options! Tokio now does multiple executors with work-stealing by default, in my understanding.
From the second blog post I actually found https://github.com/tokio-rs/tokio/pull/660 which switched tokio from 1 reactor+worker threads to n reactors with work stealing.
I believe it builds the calls up into one larger future, so it shouldn't be any less efficient.
I can't answer any of the other questions with any certainty.
Skimmed through https://vorner.github.io/async-bench.html. If I understand it correctly, one gets about twice the performance with async.
Is this correct? That seems like a compromise (code complexity vs. performance) that isn't worth making.
And, for web servers, it can be more than 2x. For example, look at techempower's plaintext benchmark: https://www.techempower.com/benchmarks/#section=data-r17&hw=...
Hyper gets 7,013,819. It's async. Iron gets 109,815, and is synchronous. That's 63x. Iron uses hyper under the hood, so that should be a good comparison.
But you are correct: if you don't have a specific need, async is generally harder than using threads for concurrency. Ideally, the async/await work in Rust will make that trade-off less extreme than it is today, which may mean more people will feel comfortable using it, since it should reduce boilerplate.
Could you expand on that? I've never heard that mentioned about async before.
If so, I'd argue that long term, once async/await has landed properly, the code largely looks and behaves the same. That said, I've not even used it yet, because I've got no clue when this is landing enough that I can reasonably use it... and I'm on nightly, lol.
Yes, the code the developer needs to read, write and understand.
I'm not familiar with how async/await will work in Rust, but I'd guess some of the code differences/complexities could be:
1. Make sure, manually(?), that all things are async / non-blocking.
2. Implementing Future.poll / wrapping types in Future? (What is Pin? ref https://rust-lang.github.io/async-book/execution/future.html)
3. Async pollution: must a function that calls an async function be async too?
4. Setting up some scheduler that controls how many concurrent async operations each thread runs?
5. More verbose error-messages / stack-traces?
Tokio is a good default choice, but some projects may have different needs.
Tokio itself is great though, so if you have no strong reason not to use it, I'd recommend it.