As AI gets better, and as training techniques and algorithms improve, fewer processors will be needed to run something useful. All of these advances will end up in the public domain, if not before they are even implemented, then soon after. There will not be many economies of scale between having 100M customers and having 10K customers, so there will be no way to keep out competitors. They will all compete on price. If the models get really, really good, a "good enough" model will end up running on your old laptop and you won't have to pay for anything.
Saying that AI will be productive is not the same as saying that AI will be profitable. And the productivity part is yet to be seen: I don't know how much polishing, or complete rethinking, AI-generated code will have to go through before it can ship as an actual product that you have to stand behind and support.
We actually don't even need that many computer programs. Hypothetically, a glut of cheap LLM coding supply might let us strip a few layers of expensive abstraction out of our current stacks, making even more code unnecessary. They kept telling us that all of that abstraction existed to save on developer labor costs, right? If AI is productive but rentiers can't extract that productivity because of competition, that equation changes.
In the end, we say that the dot-com bubble produced a huge amount of productive capacity that we were later able to put to use. But that doesn't mean that putting a quarter of a billion 90s dollars into DrKoop.com was a good idea; nope, still dumb.