In JS, for example, the `bluebird` library is a third party utility for managing execution of functions. You can do things like
const results = await Promise.map(users, user => saveUserToDBAsync(user), { concurrency: 5});
I can pass in thousands of users and specify `concurrency: 5` to guarantee that no more than 5 will execute simultaneously. Implementing this behavior in user space is trivial in JS; is it possible in Rust?
https://docs.rs/futures/0.3.0/futures/stream/trait.StreamExt...
Not only does the `futures` crate provide most things you'd ever want, it also has no special treatment – you can implement your own combinators in the same way that `futures` implements them if you need something off the beaten path.
Additionally, you're making snarky comments about how the base language doesn't handle something the way JS does...and then referencing a third-party JS library. Base JS doesn't solve your 'problem' either.
To answer your question, async/await provides hooks for an executor (tokio being the most common) to run your code. You do things like that in the executor.
https://docs.rs/tokio/0.2.0-alpha.6/tokio/executor/index.htm...
I'm not complaining about things without taking the time to understand how the language works. I'm giving examples of things that don't seem possible based on my understanding of how the language works...in hopes that someone will either clarify or accept that this is a shortcoming.
Rust gives you all the flexibility you need here. It might not be trivial yet because all the adapters might not be written yet, but that's purely a maturity problem.
The `join` macro does nothing magical. Go check out its implementation, and it will make it obvious how to implement a concurrency argument.
The primitive operation provided by a Rust `Future` is `poll`. Calling `some_future.poll(waker)` advances it forward if possible, and stashes `waker` somewhere for it to be signaled when `some_future` is ready to run again.
So the implementation of `join` is fairly straightforward: It constructs a new future wrapping its arguments, which when polled itself, re-polls each of them with the same waker it was passed.
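Here's that shape sketched with only the standard library. (The real `Future::poll` signature takes a `Context` that wraps the waker, rather than the waker directly.) `Join2` and `noop_waker` are names made up for this example; the `Unpin` bounds keep it free of unsafe pin projection:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A hypothetical two-future join: polling it re-polls each incomplete
// child with the same Context (and thus the same waker).
struct Join2<A: Future, B: Future> {
    a: Option<A>,
    a_out: Option<A::Output>,
    b: Option<B>,
    b_out: Option<B::Output>,
}

impl<A, B> Future for Join2<A, B>
where
    A: Future + Unpin,
    B: Future + Unpin,
    A::Output: Unpin,
    B::Output: Unpin,
{
    type Output = (A::Output, B::Output);

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = self.get_mut();
        if this.a_out.is_none() {
            if let Poll::Ready(v) = Pin::new(this.a.as_mut().unwrap()).poll(cx) {
                this.a_out = Some(v);
                this.a = None; // drop the finished child
            }
        }
        if this.b_out.is_none() {
            if let Poll::Ready(v) = Pin::new(this.b.as_mut().unwrap()).poll(cx) {
                this.b_out = Some(v);
                this.b = None;
            }
        }
        match (this.a_out.take(), this.b_out.take()) {
            (Some(a), Some(b)) => Poll::Ready((a, b)),
            (a, b) => {
                // Not done yet: put any finished value back and wait.
                this.a_out = a;
                this.b_out = b;
                Poll::Pending
            }
        }
    }
}

// A waker that does nothing -- enough to drive already-ready futures.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut joined = Box::pin(Join2 {
        a: Some(std::future::ready(1)),
        a_out: None,
        b: Some(std::future::ready("two")),
        b_out: None,
    });
    match joined.as_mut().poll(&mut cx) {
        Poll::Ready((a, b)) => println!("{} {}", a, b), // prints "1 two"
        Poll::Pending => unreachable!("both children are ready"),
    }
}
```

A real executor would supply a waker that reschedules the task instead of a no-op, but the joining logic is the same.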
There are also more elaborate schemes- e.g. `FuturesUnordered` uses a separate waker for each sub-future, so it can handle larger numbers of them at some coordination cost.
And from a quick scan of the source it doesn't look like anything there is impossible to implement in userspace: https://docs.rs/futures-util/0.3.0/src/futures_util/future/j...
If what you want to do is run multiple CPU-bound computations and have a central event loop awaiting their results, then yes, you'll need to spawn threads and use some kind of channel to transfer the state and results. If what you want is to run multiple IO-bound queries, then you'll want to use the facilities of the event loop of your choice (tokio, async-std, etc...) to register the intent that you're waiting for more data on a file descriptor.
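The CPU-bound case can be sketched with just the standard library; `expensive` is a made-up stand-in for real work, and in an async program the receiving end would typically be an async-aware channel (e.g. tokio's `sync::mpsc`) instead:

```rust
use std::sync::mpsc;
use std::thread;

// Stand-in for a real CPU-bound computation.
fn expensive(n: u64) -> u64 {
    (1..=n).sum()
}

fn main() {
    let (tx, rx) = mpsc::channel();
    // Fan the work out to threads; each sends its result back.
    for n in [10u64, 100, 1000] {
        let tx = tx.clone();
        thread::spawn(move || {
            tx.send(expensive(n)).expect("receiver alive");
        });
    }
    drop(tx); // close the channel so the iteration below ends
    let total: u64 = rx.iter().sum();
    println!("{}", total); // 55 + 5050 + 500500 = 505605
}
```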
The "proper" way to execute a future without awaiting it on the current task is usually to spawn it on the event loop. The syntax to do that with tokio is

  use tokio;
  let my_future = some_future();
  tokio::spawn(my_future);

Instead, you call `Future::poll`, which runs a future until it blocks again, and you provide it a way to signal when it is ready.
That signal would be handed off to an event loop (which tracks things executing on other hardware like network or disk controllers) or another part of the program (which will be scheduled eventually).