My point was that it's not a matter of DirectML vs. torch, it's simply a choice of backend for torch. It's an easy way of adding AMD support to torch-based projects on Windows, and there's probably an equally easy way of adding ROCm support on Linux. It's just that CPU and CUDA are built in and are usually the two default options when writing torch code, so somebody has to care enough to explicitly add AMD support.
It's not exactly as easy as just changing one line, by the way: not all operations are implemented, so some testing and possibly some rewrites are needed. Hopefully the GPU backend mess gets solved in the general case soon.
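To illustrate the "choice of backend" point, here's a minimal sketch of how a project might pick a device, assuming the `torch-directml` package is available on Windows (the helper function and fallback order are my own illustration, not from any particular project):

```python
def pick_device():
    """Pick a torch device: DirectML if available, then CUDA, then CPU.

    Imports are done lazily so the function degrades gracefully
    when torch or torch_directml is not installed.
    """
    try:
        # torch-directml exposes AMD (and other) GPUs on Windows
        import torch_directml
        return torch_directml.device()
    except ImportError:
        pass
    try:
        import torch
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"


device = pick_device()
```

The rest of the code then just does `tensor.to(device)` as usual, which is why the switch looks like a one-line change on the surface; the catch, as noted above, is that ops missing from the DirectML backend only show up once you actually run the model.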