Never. You can, to a first approximation, model a GPU as a whole bunch of slow CPUs harnessed together and ordered to run the same code at the same time, on different data. When you can feed all the slow CPUs different data and do real work, you get the big wins, because core count times per-core compute rate thrashes what a CPU can put up for the same workload, through sheer core count. However, if you are in an environment where only one of those CPUs can do useful work at once, or even a small handful, you're transported back to the late 1990s in performance. And you can't speed the individual cores up without trashing the GPU's aggregate performance, because the optimizations you'd need are directly at odds with each other.
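A toy sketch of that lockstep model, assuming the usual SIMT picture (32-lane "warps", divergent branches serialized with lane masking); all names here are illustrative, not any real GPU API:

```python
def run_warp(data, predicate, then_cost, else_cost):
    """Simulate one 32-lane warp running `if predicate(x): then else: else`
    in lockstep. Returns (issued_steps, useful_lane_work).
    A divergent warp must issue BOTH branches serially, with the lanes
    not on the current branch masked off and doing nothing useful."""
    taken = [predicate(x) for x in data]
    steps = useful = 0
    if any(taken):                 # warp issues the then-branch
        steps += then_cost
        useful += then_cost * sum(taken)
    if not all(taken):             # warp issues the else-branch too
        steps += else_cost
        useful += else_cost * (len(data) - sum(taken))
    return steps, useful

# Uniform warp: all 32 lanes take the same path -> full utilization.
print(run_warp(range(32), lambda x: True, 10, 10))        # -> (10, 320)

# Divergent warp: half the lanes each way -> both paths issued serially,
# so the same work now takes twice the steps.
print(run_warp(range(32), lambda x: x % 2 == 0, 10, 10))  # -> (20, 320)

# Degenerate case from above: only one lane has work. 31 of 32 lane-slots
# sit idle every step -- you're down to one slow CPU's throughput.
print(run_warp(range(32), lambda x: x == 0, 10, 0))       # -> (10, 10)
```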
CPUs are not fast or slow. GPUs are not fast or slow. They are fast and slow for certain workloads. Contrary to popular belief, CPUs are actually really good at what they do, and the workloads they are fast at are more common than the workloads GPUs are fast at. There's a lot to be said for being able to bring a lot of power to bear on a single point, and for being able to switch that point reasonably quickly (though not instantaneously). There's also a lot to be said for having a very broad capacity to run the same code on lots of things at once, but that imposes a significant restriction on the shape of problem it works for.
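Two hypothetical workloads of the same size illustrate that "shape" restriction; both functions are made-up examples, not from any library:

```python
def saxpy(a, xs, ys):
    """Elementwise a*x + y: every element is independent of the others,
    so thousands of slow lanes can each take one -- the shape that
    rewards running the same code on lots of things at once."""
    return [a * x + y for x, y in zip(xs, ys)]

def chase(links, start, hops):
    """Pointer chasing: each step depends on the previous one, so extra
    lanes can't help -- the shape where a single fast core that can
    bring a lot of power to one point wins."""
    node = start
    for _ in range(hops):
        node = links[node]
    return node

print(saxpy(2, [1, 2, 3], [10, 20, 30]))  # -> [12, 24, 36]
print(chase([1, 2, 0], 0, 3))             # -> 0  (0 -> 1 -> 2 -> 0)
```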
I'd say that broadly speaking, CPUs can make better GPUs than GPUs can make CPUs. But fortunately, we don't need to choose.