I can put your brain in a vat and stimulate your sensory neurons with samples from a statistical distribution carrying no actual meaning, and nothing about how your brain works would change either.
Both the LLM and your brain would attempt to interpret the input using referents learned during training, and both would be confounded by the information-free stimuli, because during "training" in both cases the stimuli received from the environment are structured and meaningful.
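A toy sketch of what I mean (a made-up bigram model and corpus, not a real LLM or a brain): a model trained on structured text assigns far lower likelihood to information-free noise, precisely because the noise matches none of the structure it learned.

```python
# Toy illustration only: a character-level bigram "language model" trained on
# structured text, then probed with meaningless noise. Corpus and names are
# invented for this sketch.
import math
import random
from collections import Counter, defaultdict

corpus = ("the brain learns statistical structure from its environment "
          "and so does a language model trained on text ") * 50

# Count bigram transitions -> a maximum-likelihood next-character model.
counts = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    counts[a][b] += 1

vocab = sorted(set(corpus))

def avg_log_likelihood(text):
    """Mean per-character log-probability under the bigram model,
    with add-one smoothing so unseen transitions don't hit log(0)."""
    total = 0.0
    for a, b in zip(text, text[1:]):
        c = counts[a]
        total += math.log((c[b] + 1) / (sum(c.values()) + len(vocab)))
    return total / (len(text) - 1)

structured = "the brain learns structure from the environment"
noise = "".join(random.choice(vocab) for _ in range(len(structured)))

print("structured input:", avg_log_likelihood(structured))  # close to 0
print("random noise:    ", avg_log_likelihood(noise))       # far more negative
```

The model doesn't "break" on noise; it just has nothing learned to map the input onto, which is the point: the confusion is a property of the mismatch between training structure and stimulus, not of the substrate.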
So what's your point?
By the way, I'm pretty sure a neuroscientist with 20 years of ML experience has a deeper understanding of what "meaning" is than you do. Not to mention, your response reveals a significant ignorance of unresolved philosophical problems (the hard problem of consciousness, what meaning even is), which you then paper over by assuming the foregone conclusion that whatever consciousness/meaning/reasoning is, LLMs must not have it.
I'm partial to doubting that LLMs, as they are now, have the magic sauce, but the point is more that we don't actually know enough to say either way, so why claim that we do?
We can't even say we know our own brains.