If they're not losing money on inference, why do they need to keep raising such absurd amounts of money? If inference is profitable and they're still posting big losses, then training must be absurdly expensive, which means they're essentially investing in rapidly depreciating capital assets (the models). Not a great business.
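A toy back-of-envelope sketch of that claim, with entirely made-up numbers: inference can be gross-margin positive while the business as a whole loses money, because each frontier model is a capital outlay that depreciates quickly once competitors leapfrog it.

```python
# All figures are hypothetical, purely for illustration.
inference_revenue = 4.0e9      # annual API/subscription revenue, $
inference_cost = 2.5e9         # cost to serve that revenue (GPUs, energy), $
training_capex = 3.0e9         # cost to train the current frontier model, $
model_useful_life_years = 1.0  # models depreciate fast as rivals catch up

inference_margin = inference_revenue - inference_cost
annual_depreciation = training_capex / model_useful_life_years
net = inference_margin - annual_depreciation

print(f"inference margin:       ${inference_margin / 1e9:+.1f}B")
print(f"training depreciation:  ${annual_depreciation / 1e9:-.1f}B")
print(f"net result:             ${net / 1e9:+.1f}B")
```

With these numbers, inference alone looks healthy (+$1.5B margin), but amortizing the training run over a one-year useful life flips the whole thing to -$1.5B, which is the shape of the argument above.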
I think Anthropic is an interesting case study here, since most of their volume is API traffic and they don't offer a very generous free tier (unlike OpenAI).