I think I can agree with you, but first we'd have to somehow define how to even approach this feature.
E.g., fundamentally I feel like you're drawing a distinction between blocking I/O and a "blocking" for loop. At the end of the day, they're the same in my view - one is just more likely to be costly.
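To make that concrete, here's a minimal sketch (names like `BusyLoop` and `poll_once` are hypothetical, not from any real library): a hand-rolled future whose `poll` runs a CPU loop to completion. From the executor's perspective this is indistinguishable from a `poll` that does blocking file I/O - the worker thread is occupied either way, and no other task scheduled on it can make progress.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical future whose poll() runs a pure CPU loop to completion.
// Nothing here ever yields, so polling it monopolizes the thread just
// like a blocking read() would.
struct BusyLoop {
    iterations: u64,
}

impl Future for BusyLoop {
    type Output = u64;
    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u64> {
        let mut acc = 0u64;
        for i in 0..self.iterations {
            acc = acc.wrapping_add(i); // pure CPU work, never yields
        }
        Poll::Ready(acc) // completes in one poll, however long that takes
    }
}

// Drive a single poll with a no-op waker, for demonstration only.
fn poll_once(fut: &mut BusyLoop) -> Poll<u64> {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| (), |_| (), |_| ());
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    Pin::new(fut).poll(&mut cx)
}

fn main() {
    let mut fut = BusyLoop { iterations: 10 };
    // The entire loop runs inside this one call - the "await point"
    // never gives the executor a chance to run anything else.
    assert_eq!(poll_once(&mut fut), Poll::Ready(45));
    println!("done");
}
```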
So I think for this feature to be done right, we'd have to somehow be able to analyze the likelihood of an expensive operation - and the negative consequences that operation might have on the rest of the workload. E.g., I would want the same hypothetical compile-time warnings/errors for a huge loop that a huge file load would trigger.
Otherwise a simple function call that involves no I/O and looks innocent could behave just as badly as some I/O call does.
Defining that, and informing the compiler, seems obscenely difficult. Taken to that degree, I think any interaction with any sort of heap-y thing, like iterating over a Vec, would have to be a compile error if used in a Future context.
_everything_ would have to be willing to yield. Not sure I like it. Interesting thought experiment though. I imagine some GC languages do exactly this.
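A rough sketch of what "everything yields" could look like (this is my own hypothetical illustration - `ChunkedSum`, the `CHUNK` size, and the toy `block_on` are all made up, not a real API): the loop does a bounded chunk of work per poll, then reschedules itself and returns `Pending`, so the executor can interleave other tasks between chunks.

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Hypothetical cooperative loop: sums next..end, but voluntarily
// yields back to the executor every CHUNK iterations.
struct ChunkedSum {
    next: u64,
    end: u64,
    acc: u64,
}

const CHUNK: u64 = 1000;

impl Future for ChunkedSum {
    type Output = u64;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<u64> {
        let stop = (self.next + CHUNK).min(self.end);
        while self.next < stop {
            self.acc = self.acc.wrapping_add(self.next);
            self.next += 1;
        }
        if self.next == self.end {
            Poll::Ready(self.acc)
        } else {
            // Ask to be polled again, but yield the thread right now.
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    }
}

// Minimal busy-polling executor, for demonstration only.
fn block_on<F: Future + Unpin>(mut fut: F) -> F::Output {
    fn raw() -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    static VTABLE: RawWakerVTable =
        RawWakerVTable::new(|_| raw(), |_| (), |_| (), |_| ());
    let waker = unsafe { Waker::from_raw(raw()) };
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = Pin::new(&mut fut).poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // Sum of 0..10_000, computed over ten polls instead of one.
    let total = block_on(ChunkedSum { next: 0, end: 10_000, acc: 0 });
    assert_eq!(total, 49_995_000);
    println!("{}", total);
}
```

The catch is exactly the objection above: for this to be safe in general, *every* loop and every collection traversal would need this treatment, either by hand or by compiler insertion, which is essentially the safepoint machinery some GC runtimes already have.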