You can do the same with goroutines/green threads/virtual threads, without putting the burden of distinguishing sync from async functions on the programmer.
The only arguments for async/await syntax I've ever seen are either "it allows traditionally sync languages to use async" (compatibility) or "it gives the compiler more information so it can make stuff faster" (speed boost).
So, to your grandposter:
> The point of async APIs is not speed boost, it's decoupling processing from the local call stack (which happens to hang up the GUI until the routine resolves, but also forces components to be tightly coupled and monolithic).
NO, just no! Async and similar approaches were motivated by massive concurrency (the classic example is connection handling for a web server): getting better performance than the overhead you'd pay with OS thread primitives (and even there, nowadays, that motivation doesn't always hold anymore).
In no way is the point of "async style" decoupling. We can achieve decoupling on many levels with many primitives, and it's especially unneeded for UIs, where you can decouple the UI from CPU processing from everything else with (usually; it depends) two to three permanent threads.
On top of that, async style is also horrible for our mental models. The clearest code has simple control flow, and classical threads (green or OS) shine there because they stick to that model much more closely than async does.
Async style was, and still is, mostly about performance; definitely not about decoupling, and also not about a nicer programming model.
But yeah, motivations and sense sadly often get lost to hype these days :(
Because of memory footprint and thread contention.
An OS thread's default stack size is often on the order of megabytes. On a server with 64 GB of RAM, that means you can't run more than ~64,000 threads at once. That's not really a high number in the context of modern highly concurrent servers.
Meanwhile, a goroutine's default stack size (and probably that of green threads and virtual threads in languages other than Go) is on the order of kilobytes, allowing you to run millions of them concurrently.
Thread contention wastes CPU cycles on kernel-level context switches and may lead to hard-to-debug issues such as thread starvation. You generally have no control over how the OS scheduler manages OS threads, so without sophisticated thread synchronization mechanisms, you're relying on blind luck.
Userspace threads are usually scheduled by the language runtime itself, which gives it a higher level of control. For example, the Go runtime schedules goroutines preemptively across a small pool of OS threads, ensuring that every runnable goroutine makes some progress in a reasonable amount of time.
EDIT: Since this post is about UI, yeah, "classical" OS threads are a pretty good choice, since you usually only need a single OS thread to handle all the UI events while the rest of the system does the processing. So both the "stack size" and "contention" arguments are not really relevant in that scenario.
Obviously the OS does not allocate megabytes of actual physical RAM for each thread stack; it's just reserved address space. See:
https://unix.stackexchange.com/questions/127602/default-stac...
Please, we started here with a GUI framework and someone claiming async is not about performance, and in the end you underline my point? I said it was motivated by massively concurrent use cases that require a high number of threads (and that similarly motivates green threads et al., full agree).
No - that's completely wrong.
Event loops existed prior to being popularised for IO scaling; they were used in GUIs for far longer.
Async is just a way to transpose continuation-based programming, and the callback hell involved in dealing with event loops, into linear code.
Writing UI code without async, even multithreaded code, is a PITA because UI frameworks expect UI state to be updated on the UI thread: you do the work on thread X, then schedule a callback on the UI thread to update the UI state. With async you just fire off a task, await it with a scheduler bound to the UI thread, and you have linear code flow.
"Simple Made Easy" by Rich Hickey: https://www.youtube.com/watch?v=LKtk3HCgTa8
I'm old school: I want the runtime to do as much work for me as possible so I don't have to. Basically that looks like the runtime providing things like process isolation and concurrency, even if the underlying hardware can't do that, especially if we're using a high-level scripting language like Javascript anyway. Rust I could maybe see at least providing access to async functionality, but I'd vote specifically against that footgun and go with lightweight threads and message passing (how Go does it), or scatter-gather arrays (there may be a better term for this) with the compiler detecting side effects and auto-parallelizing everything else, like loops. The simplest way to facilitate that is to use immutable data as much as possible, passed via copy-on-write (the Unix way).
The idea of async being scattered around operating systems and kernels and such is anathema to my psyche. Code smells setting off my spidey sense everywhere I look. To the point where if the world goes that route, I just don't think we'll have determinism anymore. That makes me want to get out of programming.
Note that I feel the same dismay about the DSP approach used by video cards, where the developer has to manually manage vertex buffers rather than having the runtime provide a random-access interface. Not being able to make system calls from shaders is also tragic IMHO. We've lost so much conceptual correctness in the name of performance that it breaks my heart. The cost is the loss of alternatives like genetic algorithms, which could have provided a much simpler roadmap to the inflection point we're at with AI, 20+ years ago.
It all just makes me so tired that I feel like some guy yelling at clouds now.
Async/await syntax is needed if you want to have a `with` block that manages resources across coroutine boundaries.
Consider Python's `async with`, which creates a resource that is freed when the coroutine leaves the execution context.
This is distinct from Java's try-with-resources, which doesn't work with async code. So anytime you write `try (var span = TelemetrySpan.start()) { blah.read().andThen(x -> send(url)); }`, it doesn't do what anyone would ever want: the span closes when the try block exits, not when the async chain completes. Hopefully Loom fixes it.
So the async and sync distinction is needed/useful if you have both.
I don't see the connection between async/await syntax and managing resources. The `with` block is just another mechanism for managing resources. Go has the `defer` statement, which runs the deferred function when the currently executing function returns, providing equivalent functionality without the need for async/await syntax. `with` blocks could easily be implemented in Go, but Go doesn't like duplicating functionality in the language.
> Consider Python's `async with`
Python is a traditionally synchronous language, so the async/await syntax in Python is necessary if you want to use the async runtime, while still having compatible syntax with the traditional sync runtime.
> So the async and sync distinction is needed/useful if you have both.
Every sync call can be trivially modeled as an async call that is always awaited. If you want to bolt an async runtime onto a sync runtime, you need async/await syntax. Other than that, I don't see the value it brings to a language at all.
Goroutines capture a lot more state than an async continuation/future does. The same argument you made below against OS threads applies here too.
For instance:
fn f() { var v1 = ...; var v2 = ...; g(v2); }
fn g(v2) { await; /* do something with v2 */ } // await is a context switch
A userspace thread captures v1 and v2, an async computation typically only captures v2. Compound this by all variables on the stack up to the await point, and the difference can be substantial.