This is a reverse anthropic fallacy. It may be true of a base model (though it probably isn't), but it isn't true of a production LLM system: LLM companies run evals and test suites before release, so models that clearly fail to understand things get filtered out before anyone ships them.
You're basically arguing that no computer program can work, because most randomly generated programs don't work. That ignores the selection step: the programs people actually run have passed tests, not been sampled at random.
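To make the selection point concrete, here's a minimal Python sketch (purely illustrative, not anyone's actual pipeline): almost every randomly generated candidate fails, yet everything that survives the test gate works by construction.

```python
import random

# Toy illustration of the selection effect: most randomly generated
# "programs" fail, but the ones that get released are exactly the
# ones that passed testing.

def random_program():
    """Generate a random candidate: a 3-token string we hope evals to 4."""
    return " ".join(random.choice(["2", "+", "-", "*", "("]) for _ in range(3))

def passes_tests(src):
    """A stand-in for an eval suite: does the candidate run and return 4?"""
    try:
        return eval(src) == 4  # eval on random strings: toy example only
    except Exception:
        return False  # syntax errors etc. count as failures

candidates = [random_program() for _ in range(10_000)]
released = [p for p in candidates if passes_tests(p)]

# Almost all candidates fail, but every released candidate works:
print(f"{len(released)} of {len(candidates)} pass, e.g. {released[:3]}")
```

The distribution of what gets released is nothing like the distribution of what gets generated, and that's the whole point.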