https://arxiv.org/abs/2212.10559 argues that an LLM's in-context learning can be interpreted as implicit gradient descent over the context window at inference time.
If it's learning relationships between concepts at runtime from information in the context window, then calling it a Markov chain is about as useful as calling a human a Markov chain. Perhaps we are, but the "current state" is unmeasurably complex.
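To make the "unmeasurably complex state" point concrete: an autoregressive LM is technically a Markov chain whose state is the entire context window, but the state space is so vast that the label carries little predictive content. A rough sketch of the scale, using illustrative vocabulary and context-length figures (assumptions, not tied to any particular model):

```python
import math

# An autoregressive LM is formally Markovian: the next-token distribution
# depends only on the current context window. But the "state" is the whole
# window, so the number of distinct states is an upper bound of
# vocab_size ** context_len -- astronomically large.

vocab_size = 50_000   # assumed, roughly typical LLM vocabulary size
context_len = 8_192   # assumed, roughly typical context window length

# Decimal digits in vocab_size ** context_len, computed via logarithms
# (the integer itself is far too large to print in full).
digits = int(context_len * math.log10(vocab_size)) + 1
print(digits)  # tens of thousands of decimal digits
```

By comparison, a Markov chain one could actually analyze as such (enumerate states, write a transition matrix) has a state count one can at least store; here even the digit count of the state count is in the tens of thousands.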