AMD's ROCm just isn't there yet compared to Nvidia's CUDA. I tried it on Linux with my AMD GPU and couldn't get things working. AFAIK on Windows it's even worse.
That depends entirely on which AMD device you look at: gaming GPUs are not well supported, but their Instinct line of accelerators works just as well as CUDA. Keep in mind that, in contrast to Nvidia, AMD uses different architectures for compute (CDNA) and gaming (RDNA), though they are unifying them in the next generation.
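For what it's worth, the quickest sanity check on whether a ROCm setup is actually usable is to ask PyTorch which backend it was built against. This is just a minimal sketch: on ROCm builds PyTorch reuses the `torch.cuda` API surface and sets `torch.version.hip` instead of `torch.version.cuda`, and the snippet degrades gracefully if PyTorch isn't installed at all.

```python
def pytorch_backend() -> str:
    """Report which GPU backend this PyTorch build targets, if any.

    Returns one of: "torch-missing", "rocm <hip-version>",
    "cuda <cuda-version>", or "cpu-only".
    """
    try:
        import torch
    except ImportError:
        return "torch-missing"

    # ROCm wheels set torch.version.hip; CUDA wheels set torch.version.cuda.
    hip = getattr(torch.version, "hip", None)
    if hip:
        return f"rocm {hip}"
    if torch.version.cuda:
        return f"cuda {torch.version.cuda}"
    return "cpu-only"


if __name__ == "__main__":
    print(pytorch_backend())
    # On a working ROCm install, torch.cuda.is_available() should also
    # return True, since ROCm builds answer through the torch.cuda API.
```

Note that `torch.cuda.is_available()` returning True on a ROCm build is the real smoke test; a "rocm" build string with no visible device usually points at a driver or kernel-module problem rather than PyTorch itself.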
Microsoft and Meta are running customer-facing LLM workloads on AMD's GPUs. Oracle seems to like them too. Google is doing the TPU thing with Broadcom, and Amazon seems to have decided to bet on Intel (a presumably fatal move, but time will tell). We'll get more information on the order book in a couple of weeks at earnings.
I like that the narrative has changed from "AI only runs on CUDA" to "sure, it runs fine on AMD if you must".