Assuming money even makes sense in a world with AGI, that is.
That's what the competition with OpenAI looks like to me. There are at least three other American companies with near-peer models plus strong open-weights models coming from multiple countries. No single institution or country is going to end up with a ruling-the-Earth lead in AI.
Fanciful, yes, but that is the AI fantasy.
With AI, I think there are extremely strong power-law effects that benefit the top-performing models. The best model can attract the most users, which in turn attracts the most capital, the most data, and the best researchers to build an even better model.
So while there is no hard moat, one only needs to hold the pole position until the competition runs out of money.
Also, even if no single AI company will rule the earth, if AI turns out to be useful, the AI companies might capture a chunk of the profits from that additional usefulness. If the usefulness is sufficiently large, the chunk doesn't have to be a large percentage to be large in absolute terms.
As for a post-money world: if AGI can do every economically viable task better than any human, the rational economic agent will, at the very least, let all human employees go from all jobs.