I never really got to explore the GAN generation of ML work, since I had no supported hardware (no desire to support the Nvidia monopoly on ML tooling) and refused to blow money on cloud instances I’d probably forget about at some point and wind up with a giant bill.
It’s a really different world now that I’ve got massive models running on my laptop thanks to Apple Silicon and its unified memory architecture, and the C++ ports of various diffusion image models and several families of large language models run well on my AMD GPU too… it’s so much easier to participate in the current generation of applied ML work without having to go out of my way to buy ML-specific hardware.