Simple. I have two sets of data I can pull from to validate a claim an LLM makes. First, the linguistic corpora we produce (artificial memory, analogous to the latent space built by an LLM). You are correct that this modality is shared. But I also have an internal self-narrative and experiential state that is non-linguistic and sensory/perception driven.

An LLM can try to convince me that a bunch of mathematicians would come up with a system that requires making many copies of the same bitwise representation of a block for loading by the execution framework, due to munging of the latent space via quantization. However, I have recollections of my time amongst mathematicians and theorists. I can replay my lived perceptions of those times, and analyze and extract new meaning from them as my neural hardware evolves. So when that claim is made, my validation of the world as she is comes to a screeching halt, to the tune of a recollection of a calculus class whose entire point was to pound into you the fungibility of mathematical representations (substitution), and a further connection to optimization (replace an entire cluster of an equation with a letter so you can process other things first and deal with the internal details later). That also synthesizes into the principle that mathematicians are both lazy and clever. Alias that bitch and move right along. LLMs don't have that without you deliberately injecting the mechanism into their context. In fact, they'll just run off the rails.
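To make the "alias, don't copy" instinct concrete: here's a minimal sketch of a content-addressed block store, where bitwise-identical blocks are stored once and referenced by hash rather than duplicated. Everything here (`BlockStore`, its methods) is illustrative and hypothetical, not the API of any real execution framework:

```python
import hashlib

class BlockStore:
    """Content-addressed store: bitwise-identical blocks are kept
    once and aliased by their hash, never copied."""

    def __init__(self):
        self._blocks = {}  # hash -> bytes

    def put(self, block: bytes) -> str:
        # The hash is the "letter" standing in for the whole
        # cluster of bits: alias it and move right along.
        key = hashlib.sha256(block).hexdigest()
        self._blocks.setdefault(key, block)  # store only the first copy
        return key

    def get(self, key: str) -> bytes:
        return self._blocks[key]

store = BlockStore()
a = store.put(b"\x00" * 4096)
b = store.put(b"\x00" * 4096)  # identical block: no second copy stored
assert a == b
assert len(store._blocks) == 1
```

Same lazy-and-clever move as substitution in calculus: name the repeated thing once, defer its internals, and never pay for it twice.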
Now, could an equivalent process be modelled at some point? Probably. It would be a conscious decision on our part, though, and given fears over the AI Alignment quandary, it seems a rather fraught direction in which to carelessly proceed.