I'll admit my expectation was that this method would be less finicky than GANs because it uses an MSE loss, but unfortunately it turns out to have difficulties of its own. No silver bullet, I guess. In particular, the iterative sampling (numerically integrating the reverse process) can be quite sensitive to imperfections in the model and diverge easily, at least in the early stages of training.
I decided to write this because it feels like the early days of GANs: there are plenty of "explain diffusion from scratch" articles out there, but not yet many that discuss common pitfalls and how to deal with them.