This means that C++26 is getting a default coroutine task type [1] AND a default executor [2]. You can even spawn tasks like in Tokio/async Rust. [3]
I’m not totally sure this is a GOOD idea to add to the C++ standard, but oh well.
[1] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p35...
[3] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2025/p31...
What are the downsides? Naively, it seems like a good idea to provide both a coroutine spec (for power users) and a default task type & default executor.
[1] https://devblogs.microsoft.com/oldnewthing/20210504-01/?p=10...
In general this moves way too fast for the density of the grammar it's trying to introduce. Lines like:
> We have seen Awaitors already - suspend_always is an empty awaiter type that has await_ready returns false always.
But we haven't "seen" suspend_always, it's mentioned in half a sentence in an earlier paragraph, with no further context or examples.
There's a reason Lewis Baker's writings about C++ coroutines are 5000-word monsters, the body of grammar which needs to be covered demands that level of careful and precise definition and exploration.
A stackful coroutine is "write the live registers to your stack, swap the stack pointer to a suspended coroutine, load the old live registers from your new stack". It's a short and boring sequence of assembly.
A C++ coroutine is a CFG transform with a bunch of logic around heap allocation elision to construct something less capable than the above, with a bunch of keywords and semantics that you can kind of derive from the work the compiler needs to do to wire things together.
FWIW, I think a useful addition would be for compilers to output the intermediate source code, so you can reason more easily about behaviour and debug into readable code.
- At the transport layer, I read a header for a message (which may come in one byte at a time!), get the size of the serialized message, then read N bytes for the body. The simple way to do this is a thread per socket, but that wastes a lot of memory, depending on how many sockets there are. So instead I use epoll, but now I can't write the simple loop that reads in the bytes of the message - I have to keep a buffer + allocated size + current size + state enum, wrapped in a struct, and run a switch statement every time I get an epoll event for the socket.
- Half a level up, there might be multiple messages or other negotiations that need to happen before we can start streaming messages to the owner of the connection. Once again - I need either a thread or a state enum to keep track of where we are.
Even if you want the enum to be able to report state, you can still set it somewhere for debug purposes.