That terminology isn't used at all in GPGPU compute APIs, which are tailored specifically for that purpose and use quite different programming models where you can mix host and device code in the same program.
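A minimal sketch of what that single-source model looks like in CUDA, one such compute API (assuming a CUDA toolchain; the kernel and names are illustrative, not from any real codebase):

```cuda
#include <cstdio>

// Device code: compiled for the GPU, but written in the same .cu file
// as the host code below.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// Host code: ordinary C++ that allocates device memory and launches
// the kernel with CUDA's <<<grid, block>>> syntax.
int main() {
    const int n = 256;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    scale<<<(n + 127) / 128, 128>>>(d, 2.0f, n);
    cudaDeviceSynchronize();
    cudaFree(d);
    return 0;
}
```

Note there is no "vertex shader" or "fragment shader" anywhere; the graphics pipeline stages simply don't exist in this model.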
And there are "GPUs" today that can't do graphics at all (AMD MI100/MI200 generations), or only in a restricted way (Hopper GH100), which keeps the fixed-function pipeline on just two TPCs for compatibility, so graphics runs very slowly on it.
There's absolutely a lot of "graphics" terminology that spills into GPGPU. For example, texture memory in CUDA :) The reality is that GPUs, even the ones that can't output video, are ultimately still built on hardware largely rooted in gaming. Obviously the underlying architectures of these ML cards are moving away from that (devoting increasing die space to ML-related operations), but many core components, like the memory subsystem, are still shared. It boils down to the fact that at the end of the day they're linear algebra processors.
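Concretely, the texture machinery mentioned above is exposed in CUDA as texture objects that a pure compute kernel can read through, with no graphics pipeline involved. A sketch (the API calls are real CUDA, but this is an illustrative fragment, not a complete program; `d_in` is assumed to be a previously allocated device buffer of `n` floats):

```cuda
// Bind a plain linear device buffer to a texture object so reads go
// through the texture cache hardware inherited from graphics.
cudaResourceDesc res = {};
res.resType = cudaResourceTypeLinear;
res.res.linear.devPtr = d_in;
res.res.linear.desc = cudaCreateChannelDesc<float>();
res.res.linear.sizeInBytes = n * sizeof(float);

cudaTextureDesc td = {};  // default read mode is fine for this sketch
cudaTextureObject_t tex;
cudaCreateTextureObject(&tex, &res, &td, nullptr);

// The kernel samples the "texture" like any other read-only input.
__global__ void sample(cudaTextureObject_t tex, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = tex1Dfetch<float>(tex, i);
}
```

The terminology ("texture", "channel", "fetch") is pure graphics heritage, even though nothing here ever touches a display.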
I'd say there has been quite a lot of sharing in both directions: evolutions in compute stacks shaped modern graphics APIs too.
Texture units are indeed a part useful enough to be exposed directly in GPGPU compute APIs. The "shader" term itself disappeared from those quite early, though, as did access to a good part of the fixed-function pipeline, including the rasterisers themselves.