The second you aren't just passing raw bytes around, you have to consider what can and can't be sent between processes in Python: anything that crosses the process boundary has to be pickled, and some objects simply can't be.
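As a minimal sketch of what that looks like in practice (the `double` function and the objects below are just illustrative), multiprocessing pickles arguments and return values under the hood, so anything pickle rejects can't be handed to a worker process:

```python
import pickle

def double(x):
    return 2 * x

# Module-level functions and plain data pickle fine, so they can cross
# the process boundary when handed to e.g. a multiprocessing.Pool:
pickle.dumps(double)
pickle.dumps({"nums": [1, 2, 3]})

# Lambdas, open files, sockets, DB connections, etc. do not:
for obj in (lambda x: 2 * x, open(__file__)):
    try:
        pickle.dumps(obj)
    except (TypeError, AttributeError, pickle.PicklingError) as exc:
        print(f"cannot send between processes: {exc}")
```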
You can also have far more coroutines in flight concurrently than you could processes or threads.
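A rough illustration of that point (the coroutine count and the `worker` coroutine are made up): each coroutine is just a small object parked on the event loop, not an OS thread or process, so tens of thousands are cheap.

```python
import asyncio

async def worker(i: int) -> int:
    # Waiting on the event loop, not occupying a thread or process.
    await asyncio.sleep(1)
    return i

async def main():
    results = await asyncio.gather(*(worker(i) for i in range(50_000)))
    print(len(results))  # 50000, finishing in roughly one second total

asyncio.run(main())
```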
Another reason, if you're using the Trio async library, is that managing and cancelling multiple tasks is really easy, and you can be sure that none get lost. This update to Python brings some of that to core asyncio (but I'll stick with Trio for now, thanks).
This is far easier to work with than multiprocessing.
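For reference, a small sketch of the 3.11 `asyncio.TaskGroup` API being alluded to, which is loosely analogous to a Trio nursery (the `fetch` coroutine and delays are invented for illustration):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    await asyncio.sleep(delay)
    return name

async def main():
    # Every task started in the group is guaranteed to be finished
    # (or cancelled) by the time the `async with` block exits,
    # so none get lost.
    async with asyncio.TaskGroup() as tg:
        t1 = tg.create_task(fetch("a", 0.1))
        t2 = tg.create_task(fetch("b", 0.2))
    print(t1.result(), t2.result())
    # If any task raises, the siblings are cancelled and the errors
    # surface together as an ExceptionGroup.

asyncio.run(main())  # requires Python 3.11+
```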
When doing e.g. AWS work, multiprocessing is additional pain for little gain.
Maybe 3.11 will make threads less painful.
The docs even warn about this for subprocess and suggest using asyncio to avoid it, although the docs are a bit misleading: it only busy-loops if the timeout is not None, and only when running on Mac/Linux, not Windows.
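For what it's worth, here's a hedged sketch of the asyncio-based alternative the subprocess docs point towards, assuming a Unix-ish environment with a `sleep` binary on the PATH; the wait here is event-driven rather than a poll-and-sleep loop:

```python
import asyncio

async def run_with_timeout(timeout: float) -> bytes:
    proc = await asyncio.create_subprocess_exec(
        "sleep", "5",
        stdout=asyncio.subprocess.PIPE,
    )
    try:
        # The event loop wakes us when the process exits; no busy loop.
        stdout, _ = await asyncio.wait_for(proc.communicate(), timeout)
        return stdout
    except asyncio.TimeoutError:
        proc.kill()
        await proc.wait()
        raise

# asyncio.run(run_with_timeout(1.0))  # raises TimeoutError after ~1s
```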
Multiprocessing provides parallelism up to what the machine supports, but no additional degree of concurrency; asyncio provides a fairly high degree of concurrency, but no parallelism.
Of course, you can use them together to get both.
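A rough sketch of one way to do that, assuming a picklable CPU-bound function (`crunch` here is made up): run the heavy work in a `ProcessPoolExecutor` from inside an asyncio program via `run_in_executor`.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def crunch(n: int) -> int:
    # CPU-bound work that benefits from a separate process (and its own GIL).
    return sum(i * i for i in range(n))

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # asyncio supplies the concurrency (many awaits in flight),
        # the process pool supplies the parallelism (multiple cores).
        jobs = [loop.run_in_executor(pool, crunch, 2_000_000) for _ in range(8)]
        results = await asyncio.gather(*jobs)
    print(results[:2])

if __name__ == "__main__":
    asyncio.run(main())
```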
Some IO-bound workloads are a really good fit for the asyncio model, while other workloads might be better served by processes or threads. They're three separate tools whose use cases might overlap, but they aren't necessarily replacing one another. Multiple processes still have their place even while asyncio exists, and vice versa.