This is a mostly meaningless semantic distinction. I can ask you to give a synonym for "king" and you might suggest ruler, lord, or monarch. I can ask a word2vec model for a synonym for "king" and it will provide similar suggestions. What "understanding" of the words' meanings do you have that the model lacks? Be specific!
Definitions are abstract concepts, so your ability to pick similar words and the model's ability to do the same are equivalent evidence. To put it differently:
>The only reason why we know that words it places in the general vicinity of each other have similar meaning is because we already understand meaning and we can interpret the results.
is not correct. The only reason we know that the words it places in the general vicinity of each other have similar meanings is that our mental models put the same words in the same vicinities.
>Same thing here. You seem pretty certain that with more data (perhaps with a deeper model) you can represent something that the model doesn't have an internal representation for. But just because the behaviour of the model partially matches the behaviour of a system that does have an internal representation for such a thing, in other words, a human, that doesn't mean that the model also behaves the way it behaves because it models the world in the same way that the human does.
This doesn't matter. Just because the model's internal representation of a concept doesn't map obviously to the way you understand it doesn't mean that the model lacks a representation of that concept. Word2vec models do represent concepts. We can interpolate along conceptual axes in word2vec spaces. That's as close to an internal representation of an isolated concept as you're gonna get. Like, I can ask a word2vec model how "male" or "female" a particular term is, and get a (meaningful!) answer. We never explicitly told the word2vec model to track gender, but it can still provide answers because that information is encoded.
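A minimal sketch of what "interpolating along a conceptual axis" means. The vectors below are invented toy values in 2 dimensions (real word2vec embeddings are learned and have hundreds of dimensions), but the mechanics are the same: arithmetic on vectors recovers the classic "king - man + woman ≈ queen" analogy, and projecting onto a "gender" direction answers the "how female is this term?" question.

```python
import numpy as np

# Toy 2-D "embeddings", invented purely for illustration.
# Dimension 0 loosely encodes "royalty", dimension 1 encodes "gender".
vecs = {
    "king":     np.array([0.9,  0.7]),
    "queen":    np.array([0.9, -0.7]),
    "man":      np.array([0.1,  0.7]),
    "woman":    np.array([0.1, -0.7]),
    "prince":   np.array([0.8,  0.6]),
    "princess": np.array([0.8, -0.6]),
    "apple":    np.array([-0.9, 0.0]),
}

def nearest(target, exclude):
    """Return the vocabulary word whose vector has the highest cosine
    similarity to `target`, skipping the words used to build the query."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vecs if w not in exclude),
               key=lambda w: cos(vecs[w], target))

# The classic analogy: king - man + woman lands near queen.
result = nearest(vecs["king"] - vecs["man"] + vecs["woman"],
                 exclude={"king", "man", "woman"})
print(result)  # queen

# "How female is this term?" -- project onto a gender axis, here defined
# as the direction from "man" to "woman". Nobody told the model about
# gender explicitly; the axis is just a direction in the space.
gender_axis = vecs["woman"] - vecs["man"]
print(vecs["queen"] @ gender_axis > vecs["king"] @ gender_axis)  # True
```

Nothing here was labeled "gender" during training of a real model either; the axis is recovered purely from the geometry of the space.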
>Without such an internal representation all you have is some observed behaviour and some vague claims about understanding this or learning that, at which point you can claim anything you like.
Again, who cares? If it passes a relevant "Turing test", why does your quibble about the internal representation not being meaningful enough to you matter? Clearly there's an internal representation that's powerful enough to be useful. Just because you can't understand it at first glance doesn't make it not real.
To address another one of your comments:
> Hi. From your write-up and a quick look at your notebook that's what your model is doing. And you measure its accuracy as its ability to do so. Is that incorrect?
Neither I nor the person you responded to is the author. But yes, that understanding is incorrect. The model is indeed trained on historic picks, but that is not the same thing as reproducing a deck it has seen before. To illustrate, imagine that the training set of ~2000 datapoints had 1999 identical situations and 1 unique one.
The unique one is "given options A and history A', pick card a". The other 1999 identical ones are "given options A and history B', pick b" (yes this is as intended). A model trained to exactly reproduce a deck it had seen previously would pick "a". The model in question would (likely, depending on the exact tunings and choices) pick "b".
This bias towards the mean is intentional, and is completely different from "trying to recreate an exact deck it's seen before", which isn't a thing you normally do outside of autoencoders and, as others have mentioned, doesn't make much sense.
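The thought experiment above can be sketched directly. The "model" below is a deliberately crude stand-in (a pick-frequency counter, not the actual model from the write-up): it illustrates how a learner biased toward the mean of its training data answers the unique situation with "b", where a memorizing lookup table would answer "a". The dataset shape and the `predict` helper are hypothetical, invented for this illustration.

```python
from collections import Counter

# Hypothetical training set mirroring the thought experiment:
# 1999 identical situations where card "b" was picked from options A,
# plus 1 unique situation (history A') where card "a" was picked.
options = ("a", "b")
training = [{"options": options, "history": "B'", "pick": "b"}] * 1999
training += [{"options": options, "history": "A'", "pick": "a"}]

# Stand-in learner: ignore history entirely and just count how often
# each card was picked overall -- i.e. regress toward the mean pick.
pick_counts = Counter(ex["pick"] for ex in training)

def predict(options, history):
    # A memorizing model would look up (options, history) and return
    # the stored pick ("a" for history A'). This one returns the most
    # frequently picked option across all of training.
    return max(options, key=lambda card: pick_counts[card])

# Shown the unique situation (options A, history A'), the mean-biased
# learner still picks "b", not the memorized "a".
print(predict(options, "A'"))  # b
```

A model that scored 100% on "reproduce the deck you saw" would have to answer "a" here; the fact that this one answers "b" is exactly the generalizing behaviour being described.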