And of course they also miss things like embodiment, mirror neurons, etc.
If an LLM makes a mistake, it will tell you it is sorry. But does it really feel sorry?
And what does it mean to feel sorry? Beyond the fallible and imprecise human introspective notion of "sorry", that is. A definition that can span species and computing substrates. A deanthropomorphized definition of "sorry", so to speak.