Honestly, I don't feel that I spend time managing memory in Rust: I use `Arc` pointers for long-lived shared objects (database connection pool, mailer, ...), and otherwise ownership is pretty straightforward. During the lifecycle of a request, data is moved from the top layer (HTTP handlers) down to the bottom (repositories accessing the database) and back up for the response.
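A minimal sketch of that shape, with hypothetical `AppState`, `DbPool`, and `Mailer` types standing in for the real ones: the long-lived state sits behind an `Arc`, each handler clones the `Arc` (a cheap refcount bump), and the request data itself is simply moved down and back up.

```rust
use std::sync::Arc;

// Hypothetical long-lived shared state; the names are illustrative only.
struct DbPool;
struct Mailer;

struct AppState {
    db: DbPool,
    _mailer: Mailer,
}

// Each "handler" receives a clone of the Arc, not a copy of the state.
fn handle_request(state: Arc<AppState>, user_id: u64) -> String {
    // Ownership of `user_id` flows down into the repository layer...
    let row = fetch_user(&state.db, user_id);
    // ...and the result is moved back up for the response.
    format!("user: {}", row)
}

// Stand-in for a real repository query.
fn fetch_user(_db: &DbPool, user_id: u64) -> u64 {
    user_id
}

fn main() {
    let state = Arc::new(AppState { db: DbPool, _mailer: Mailer });
    let response = handle_request(Arc::clone(&state), 42);
    println!("{}", response);
}
```

No borrow-checker gymnastics are needed here: the only shared thing is the `Arc`, and everything request-scoped is owned by exactly one layer at a time.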
I am mainly interested in how much time is spent appeasing the borrow checker and managing memory, mental cycles that would otherwise be freed up if a GC were available. The async story for Rust also seems confusing (but I hold my hands up and plead ignorance on this count).
Does that mean you basically enforce sequential database reads? That seems like a bottleneck if your server is concurrent.
If they're sharing a connection pool, then each HTTP request gets its own connection out of the pool, and the requests run concurrently (access to the pool itself may or may not be serialised depending on the pool's internals; sqlx's pool, for instance, uses interior mutability rather than requiring an external lock).
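To make that concrete, here's a toy pool sketch (std-only, not sqlx's actual implementation): only the brief checkout/return of a connection handle is locked, never the queries themselves, so two requests can each hold their own connection at the same time.

```rust
use std::sync::{Arc, Mutex};

// Toy connection handle; a real one would wrap a socket.
struct Conn(u32);

// Toy pool: the Mutex guards only the idle list, not query execution.
struct Pool {
    idle: Mutex<Vec<Conn>>,
}

impl Pool {
    fn new(size: u32) -> Arc<Self> {
        Arc::new(Pool {
            idle: Mutex::new((0..size).map(Conn).collect()),
        })
    }

    // Briefly lock to pop a connection; the lock is released
    // before any query would run on the returned handle.
    fn acquire(&self) -> Option<Conn> {
        self.idle.lock().unwrap().pop()
    }

    fn release(&self, conn: Conn) {
        self.idle.lock().unwrap().push(conn);
    }
}

fn main() {
    let pool = Pool::new(2);
    // Two concurrent "requests" each check out their own connection.
    let a = pool.acquire().unwrap();
    let b = pool.acquire().unwrap();
    assert!(pool.acquire().is_none()); // pool exhausted, a third would wait
    pool.release(a);
    pool.release(b);
    println!("ok");
}
```

The point is that the serialised section is a few nanoseconds of list manipulation; the database round-trips themselves overlap freely.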
The mailer might be behind a mutex (with its access completely serialised), or the "mailer" might just be the input side of a queue or channel, with the actual mailing work done by a separate task or process (which seems far more likely than blocking the request on sending emails).
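The queue/channel version can be sketched with std's `mpsc` (the `Email` type and the worker are hypothetical): handlers only enqueue, and a background worker owns the receiving end and does the slow sending.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical mail job; a handler constructs one and enqueues it.
struct Email {
    to: String,
    body: String,
}

fn main() {
    let (tx, rx) = mpsc::channel::<Email>();

    // Worker owns the receiving end and does the actual (slow) sending.
    let worker = thread::spawn(move || {
        let mut sent = 0;
        for email in rx {
            // Stand-in for a real SMTP call.
            println!("sending to {}: {}", email.to, email.body);
            sent += 1;
        }
        sent
    });

    // The "mailer" handed to request handlers is just a cheap Sender clone;
    // send() returns immediately, so the request never waits on SMTP.
    let mailer = tx.clone();
    mailer
        .send(Email { to: "a@example.com".into(), body: "hi".into() })
        .unwrap();

    // Dropping all senders closes the channel and ends the worker's loop.
    drop(mailer);
    drop(tx);
    let sent = worker.join().unwrap();
    println!("sent {} email(s)", sent);
}
```

In an async server the channel would be a tokio one and the worker a spawned task (or a separate process fed from a durable queue), but the ownership shape is the same: the request path holds only the sender.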