there are google TPUs. Do they provide better performance/dollar, or does Google also charge a high margin, or is Nvidia doing some unique optimizations?
So now Nvidia is in the privileged position of having both highly-flexible GPGPU compute hardware and a highly-advanced software layer to use it with. TPUs and NPUs are neat, but fundamentally they are neither of these things; they have an extremely limited processing pipeline exposed by a high-level library, and that's usually it. CUDA is comparatively flexible, to the point that it doesn't even rely on AI to sell its product.
To me, hating on Nvidia feels like being mad that a well-bred horse with great odds beat out the jockey you were betting on. Why should we hate them for their "monopoly" on features that Apple and Khronos gave up developing? Because they're blocking out their competitors by... not having working macOS drivers per Apple's request? This is the natural and obvious outcome of letting businesses commoditize specialized compute. This is what the industry wanted, and it's rich watching the customers protest like they were fooled into thinking everything was fine.
my understanding is that compilers can compile some straightforward JAX, TF, and PyTorch programs to both CUDA and TPU, so they're in direct competition in the current hot topics (LLMs, deep learning).
The math probably adds up in Google's favor with the TPUs, even if they end up being less efficient and slower per-unit than Nvidia hardware. They don't need to pay Nvidia's margins, and they can run them 24/7 for their intended purpose. Previous-generation TPUs can't be reused or resold for other purposes though, and if/when AI blows over as a trend, you can't easily pivot them to mining crypto or HPC calculations the way you could with an Nvidia cluster.
Is it because you don't need to buy many GPUs to do your workload?
I could have written almost the same reasons for GPU workloads.