You are right, I was wrong. I think anthropomorphizing LLMs is kind of silly to begin with. The whole "LLMs are closer to people than to machines" comparison is misleading, especially when the argument rests on output variability.
Their outputs can vary in ways that superficially resemble human variability, but variability alone is a weak basis for the analogy. A more meaningful comparison looks at functional behaviors such as pattern recognition, contextual adaptation, generalization to new prompts, and multi-step reasoning, which do resemble aspects of human capability. Generalization in particular lets an LLM produce coherent outputs for tasks it was never explicitly trained on, rather than just parroting training data, which makes it a far more informative signal than randomness alone.
That said, none of this means LLMs are conscious, intentional, or genuinely understand anything. I am glad you brought up the seed and determinism point: fix the sampling seed (or use greedy decoding) and the outputs become fully reproducible, so the "human-like" unpredictability only shows up under stochastic sampling. Judging models by real functional capabilities rather than by output variability is far more informative, and I think more people should be aware of that.
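To make the determinism point concrete, here is a minimal sketch using the Hugging Face transformers library (the model choice and generation parameters are just illustrative): greedy decoding is deterministic by construction, and even stochastic sampling becomes reproducible once you pin the RNG seed.

```python
# Minimal sketch of the seed/determinism point, assuming a Hugging Face
# transformers setup. "gpt2" is just a small example model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("The seed and determinism point:", return_tensors="pt")

# Greedy decoding: no randomness at all, so the output is identical on every run.
greedy = model.generate(
    **inputs, do_sample=False, max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,
)

# Stochastic sampling: output varies run to run, UNLESS you pin the seed,
# which makes the "human-like" variability fully reproducible.
torch.manual_seed(42)
sampled = model.generate(
    **inputs, do_sample=True, temperature=0.8, max_new_tokens=20,
    pad_token_id=tokenizer.eos_token_id,
)

print(tokenizer.decode(greedy[0], skip_special_tokens=True))
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```

Run the script twice: the greedy output never changes, and the sampled output only changes if you remove the `torch.manual_seed(42)` line.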