I've had very few issues with that. Mostly, the async code looks almost identical to the synchronous variant in Kotlin. There's a bit of a learning curve, and it helps if you understand parallel and asynchronous programming a little. There is no magic with blocking vs. non-blocking: your IDE will tell you if you are doing it wrong, and the fix is usually to have a coroutine context backed by a thread pool and schedule the coroutines that block there.
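A minimal sketch of that fix, using kotlinx-coroutines (the `blockingLookup` function is a made-up stand-in for any thread-blocking call, e.g. legacy JDBC):

```kotlin
import kotlinx.coroutines.*

// Hypothetical blocking call; the name is illustrative only.
fun blockingLookup(id: Int): String {
    Thread.sleep(50)          // simulates a thread-blocking operation
    return "user-$id"
}

fun main() = runBlocking {
    // Dispatchers.IO is a dispatcher backed by a thread pool sized for
    // blocking work; confining blocking calls there keeps other dispatchers
    // (e.g. Dispatchers.Default) free for CPU-bound coroutines.
    val user = withContext(Dispatchers.IO) {
        blockingLookup(42)
    }
    println(user)             // prints "user-42"
}
```

The IDE warning mentioned above comes from exactly this pattern: calling something like `blockingLookup` outside a thread-pool-backed dispatcher gets flagged as a possibly blocking call in a non-blocking context.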
Loom won't make any of this obsolete. If anything, coroutines will probably be the most user-friendly API for using Loom when it becomes available. Loom is a fairly low-level API, and you probably should not use it directly but instead use something built on top of it, like Kotlin coroutines, which are designed as a high-level API that can be backed by all sorts of asynchronous and parallel computing implementations.
Extension functions exist for WebFlux, RxJava, Vert.x, thread pools, and much more; coroutines basically sit on top of these things. They also work on Kotlin/Native and Kotlin/JS (in a browser and in Node.js) as well as on the JVM, where the underlying platforms of course provide very different implementations. For the end user, coroutines work pretty much the same way across all of these platforms.
Loom will just be another low-level backend for the coroutine API. It will likely give you some performance benefits where it's available, and that's about it. Update the coroutine library, maybe fiddle a bit with your coroutine scopes, and you'll be using Loom. I'd expect very few code changes to be needed when the time comes; perhaps none at all, aside from bumping a version number.
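To make the "another backend" point concrete, here is a sketch of what plugging Loom in could look like today, assuming Java 21+ (virtual threads) and kotlinx-coroutines: `asCoroutineDispatcher()` wraps any `ExecutorService` as a dispatcher, so a virtual-thread-per-task executor becomes a Loom-backed coroutine context with no other code changes.

```kotlin
import java.util.concurrent.Executors
import kotlinx.coroutines.*

fun main() {
    // Wrap a Loom executor as a coroutine dispatcher; use { } closes it when done.
    Executors.newVirtualThreadPerTaskExecutor().asCoroutineDispatcher().use { loom ->
        runBlocking {
            val result = withContext(loom) {
                // Blocking is cheap here: each task runs on its own virtual thread.
                Thread.sleep(10)
                "done"
            }
            println(result)   // prints "done"
        }
    }
}
```

The calling code is unchanged from the thread-pool version; only the dispatcher behind `withContext` differs, which is exactly the "swap the backend" scenario described above.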