I think there are plenty of competitors in the "LLMs with open weights" space to essentially make the models a commodity. Once the model is a commodity, all that's left is the compute cost, and there's no way someone running a datacenter can undercut "the computer that I already have running on my desk" on marginal cost.
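
A quick back-of-envelope sketch of that marginal-cost argument. Every number here is an assumption I'm plugging in for illustration (GPU power draw, electricity price, local token rate, hosted API price), not a measurement:

```python
# Back-of-envelope: marginal cost of local inference vs a hosted API.
# ALL numbers below are illustrative assumptions, not measurements.

WATTS_UNDER_LOAD = 350             # assumed desktop GPU draw during inference
ELECTRICITY_USD_PER_KWH = 0.15     # assumed residential electricity price
TOKENS_PER_SECOND = 40             # assumed local generation speed
API_USD_PER_MILLION_TOKENS = 0.50  # assumed hosted price for a comparable open model

def local_cost_per_million_tokens():
    seconds = 1_000_000 / TOKENS_PER_SECOND
    kwh = WATTS_UNDER_LOAD / 1000 * seconds / 3600
    return kwh * ELECTRICITY_USD_PER_KWH

print(f"local marginal cost: ${local_cost_per_million_tokens():.2f} per 1M tokens")
print(f"hosted price:        ${API_USD_PER_MILLION_TOKENS:.2f} per 1M tokens")
```

Under these made-up numbers local electricity comes out around $0.36 per million tokens, but the real point is that the hardware is already sunk for "the computer I already have," while a datacenter has to amortize its hardware, cooling, and margin into every token it sells.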