Er, should it? Even if you trained an LLM on exactly this type of question, if the sequence to predict in the training data is truly random, then any specific output is equally wrong, even a fixed "1234" or "0000". There is no signal favouring one answer over another; at best the loss nudges the predicted distribution toward uniform, which still says nothing about which single answer gets decoded.
On the other hand, LLMs show they know very well what a random number is supposed to be, namely something that doesn't look like anything in particular, so they strive to come up with exactly such a number. Which, given the same starting conditions, happens to be the same number every time.
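A minimal sketch of that last point (no real model involved; the vocabulary and logits below are made up for illustration): even a near-uniform next-token distribution, decoded greedily, collapses to the same "random" digit on every run.

```python
import math

def softmax(logits):
    # standard numerically-stable softmax
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def greedy_pick(logits, vocab):
    # greedy decoding: argmax over probabilities,
    # ties broken deterministically (first max wins)
    probs = softmax(logits)
    best = max(range(len(probs)), key=lambda i: probs[i])
    return vocab[best]

vocab = [str(d) for d in range(10)]
# near-uniform logits: the tiny, fixed perturbations a trained model
# ends up with decide the winner the same way on every call
logits = [0.01, 0.03, 0.00, 0.02, 0.07, 0.01, 0.00, 0.04, 0.02, 0.01]

picks = {greedy_pick(logits, vocab) for _ in range(1000)}
print(picks)  # → {'4'}: 1000 "samples", one single outcome
```

Only actual sampling (temperature > 0) would break the tie differently across runs; with greedy decoding the "randomness" is entirely in the eye of the beholder.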