But:
- In this context, and following the whole second half of the 20th century, in which cognitive science and psychology moved past behaviorism and sought explanations of the _mechanisms_ underlying mental phenomena, a scientific discussion doesn't have to restrict itself to considering only what the LLM says. Neither we nor LLMs are black boxes. Evidence of _how_ we do what we do is part of scientific inquiry.
- But the LLM does _not_ reproduce all the behaviors of an agent with a theory of mind. A two-year-old with a developing theory of mind may try to hide food they don't want to eat. A four-year-old playing hide-and-seek picks spots where they think their play-partner won't look. They take _actions_ appropriate to their goals and context, actions that require considering the goals of others. The LLM shows elaborate behavior in the one dimension in which it has been extensively trained. It has no capacity to do anything else, or even to be exposed to non-linguistic contexts.
I am in no way arguing that only meat-based minds can "know". I'm saying that the data, training regime, and model structure used for LLMs specifically are extremely impoverished: we show the model language but no other representation of the things language refers to. Similarly, image-generating AIs know what images look like, but they don't know how bodies or physical objects interact, because they have never been exposed to them. Of _course_ we get LLMs that hallucinate and image generators that produce messed-up bodies.
On the other hand, there are some pretty cool reinforcement-learning results where agents show what looks like cooperation, develop adversarial strategies, etc. There are experiments where software agents collaboratively invent a language to refer to objects in their (virtual) environment in order to accomplish simple tasks. I think there are a lot of near- and medium-term possibilities coming from multi-modal models (i.e., models trained on related text, images, audio, and video) and RL, which could yield knowledge of a kind that LLMs simply do not have.
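To give a flavor of those language-invention experiments, here's a toy sketch of a referential game, not taken from any particular paper: a speaker sees a target object and emits an arbitrary symbol, a listener sees only the symbol and guesses the object, and both are rewarded when the guess is right. All the specifics (five objects, five symbols, tabular policies, plain REINFORCE with a running-average baseline) are my own simplifying assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_OBJECTS = 5   # objects in the toy world
VOCAB = 5       # symbols available; none has a preassigned meaning
LR = 0.1
EPISODES = 20_000

# Tabular policies: rows of logits over symbols (speaker) / objects (listener).
speaker = np.zeros((N_OBJECTS, VOCAB))
listener = np.zeros((VOCAB, N_OBJECTS))
baseline = 0.0  # running average reward, to reduce gradient variance

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(EPISODES):
    target = rng.integers(N_OBJECTS)

    # Speaker samples a symbol for the target object.
    p_sym = softmax(speaker[target])
    sym = rng.choice(VOCAB, p=p_sym)

    # Listener, seeing only the symbol, guesses which object it names.
    p_obj = softmax(listener[sym])
    guess = rng.choice(N_OBJECTS, p=p_obj)

    # Shared reward: success only if the listener recovered the target.
    r = 1.0 if guess == target else 0.0
    adv = r - baseline
    baseline += 0.05 * (r - baseline)

    # REINFORCE: for a categorical policy,
    # d(log pi)/d(logits) = one_hot(action) - probs.
    speaker[target] += LR * adv * (np.eye(VOCAB)[sym] - p_sym)
    listener[sym] += LR * adv * (np.eye(N_OBJECTS)[guess] - p_obj)

# The emergent "lexicon": which symbol each object most often gets.
for obj in range(N_OBJECTS):
    print(f"object {obj} -> symbol {softmax(speaker[obj]).argmax()}")
```

Run for enough episodes, the agents usually settle on a near one-to-one object-to-symbol code, and that's the whole trick: the "language" is whatever mapping happens to get rewarded and reinforced. The published versions swap the tables for neural networks and the integers for richer observations, but the dynamic is the same.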