AMD has great hardware, but they never could be assed to do anything about their software.
AMD did approximately nothing with ROCm.
Investing $10-20M of developer time into making ROCm work reliably and easily would have paid for itself 100x.
I love when outsiders throw around random-ass takes like this. Just curious: how'd you come up with this number? Is it backed by literally any thought/data/roadmap?
Let's do some rough back-of-the-envelope math: $20MM is 100 engineers working for one year. Or maybe it's five years of work for 20 engineers? Which of those (if any!) sounds to you like a reasonable assessment of the gap between AMD and NVIDIA?
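To make the arithmetic behind that explicit, here's a quick sketch; the ~$200k fully-loaded cost per engineer-year is my own assumption for illustration, not a figure from anyone's actual budget:

```python
# Back-of-the-envelope: how many engineer-years does a budget buy?
# Assumption: ~$200k fully-loaded cost per engineer per year (salary + overhead).
COST_PER_ENGINEER_YEAR = 200_000

def engineer_years(budget_usd: float) -> float:
    return budget_usd / COST_PER_ENGINEER_YEAR

budget = 20_000_000  # the $20MM figure above

total = engineer_years(budget)
print(total)        # 100.0 engineer-years total
print(total / 1)    # i.e. 100 engineers for 1 year
print(total / 5)    # or 20 engineers for 5 years
```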
A quick reminder before you answer: whatever you think is actually involved in improving ROCm, unless you work on ROCm, you're almost certainly not considering an entire iceberg of complexity (runtime/driver/firmware).
Let's put it another way: forget AMD investing; I'll invest in you, since you're so confident. I'll give you $20MM as a high-interest, non-dischargeable loan (say 8%), plus all the runtime/driver/firmware source for AMDGPU. Up for it? All you have to do is improve ROCm until it's competitive with CUDA; then you can take home a huge slice of the TAM and get rich. Easy, right?
Cutting to the chase: you're off by at least two orders of magnitude with your goofy estimate; the real number is probably closer to $200MM invested every year for 10 years. And you still wouldn't be caught up, because in those 10 years NVIDIA won't be resting on its laurels waiting for you to catch up!
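For the skeptical reader, the "two orders of magnitude" claim is just this division; both figures are the thread's own back-of-the-envelope numbers, not anything sourced from AMD:

```python
import math

# Compare the original $20MM estimate with the rough counter-estimate above:
# ~$200MM per year, sustained for 10 years.
original_estimate = 20_000_000
counter_estimate = 200_000_000 * 10   # $2B total

ratio = counter_estimate / original_estimate   # 100x
print(ratio, math.log10(ratio))                # 100.0  2.0 orders of magnitude
```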