The network bandwidth between nodes is a bigger limitation than compute. The newest Nvidia cards now come with interconnects on the order of 400 Gbit/s to communicate with each other, even on a single motherboard.
Compared to SETI@home or Folding@home style workloads, training AI models this way would be glacially slow.
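A quick back-of-envelope sketch makes the gap concrete. The numbers here are illustrative assumptions, not benchmarks: a 7B-parameter model with fp16 gradients, a ~100 Mbit/s residential uplink, and a ~400 Gbit/s datacenter interconnect, ignoring all protocol overhead and compression.

```python
# Back-of-envelope: time to ship one full set of fp16 gradients.
# Illustrative assumptions: 7B params, no overhead or compression.
PARAMS = 7e9
BYTES_PER_PARAM = 2  # fp16 gradients

payload_bytes = PARAMS * BYTES_PER_PARAM  # 14 GB per sync step

def sync_seconds(link_gbit_per_s: float) -> float:
    """Seconds to move the gradient payload over a link of the given speed."""
    return payload_bytes / (link_gbit_per_s * 1e9 / 8)

home_uplink = sync_seconds(0.1)   # ~100 Mbit/s residential upload
datacenter = sync_seconds(400.0)  # ~400 Gbit/s GPU interconnect

print(f"home uplink: {home_uplink:,.0f} s per sync step")  # ~1,120 s
print(f"datacenter:  {datacenter:.2f} s per sync step")    # ~0.28 s
```

Under these assumptions a single gradient exchange that takes a fraction of a second over a datacenter interconnect takes nearly twenty minutes over a home connection, which is why the volunteer-computing model breaks down for tightly synchronized training.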