The model of "someone will find this training computer useful" is... fine. Google TPUs, NVidia DGX, Intel Xe-HPC, AMD MI100, Cerebras wafer-scale AI. These are all products nominally aimed at the same market: selling hardware / APIs / SDKs that make training easier.
It's a pretty crowded field, and someone will probably strike gold (NVidia has a lead, but... it's still anyone's game IMO).
-------
If Tesla's goal is to compete against everyone else (or even just make a chip that's cost-competitive), Tesla needs far more volume than the alleged 3000 chips quoted in the article. I don't know where they got that figure, but there's no way in hell 3k chips is cost-effective.
That's the name of the game: volume. NVidia leads because NVidia sells the most GPUs right now, which means its R&D costs are amortized over the broadest base, and its customers' engineering costs (aka CUDA training) are spread across the widest pool of programmers. That creates a self-reinforcing cycle: better hardware, lower costs, and a larger community of programmers to learn from.
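The volume argument is just amortization arithmetic. Here's a toy sketch (every number here is hypothetical, purely for illustration, not a real figure from any vendor):

```python
# Toy amortization model: per-chip cost = marginal manufacturing cost
# plus a fixed R&D budget spread over the production volume.
def per_chip_cost(rnd_budget, marginal_cost, volume):
    return marginal_cost + rnd_budget / volume

# Hypothetical $1B R&D budget, $2k marginal cost per chip:
small_run = per_chip_cost(1_000_000_000, 2_000, 3_000)      # a 3,000-chip run
large_run = per_chip_cost(1_000_000_000, 2_000, 1_000_000)  # a 1M-chip run

print(round(small_run))  # ~335,333 per chip: R&D dominates at low volume
print(round(large_run))  # 3,000 per chip: R&D almost vanishes at high volume
```

At 3k chips the R&D term swamps everything else; at a million chips it's a rounding error. That's the structural advantage a volume leader has, before you even count the software ecosystem.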