A reasonable explanation is that a few neurons probably don't have consciousness, so they can't really experience anything.
I think once they're able to put 15 million such neurons on a single device, that puts them in the range of more relatable animals like mice and Syrian hamsters, and I expect that relatability is what will drive most opinions about consciousness.
Given our piss-poor understanding of consciousness, I have to ask: on what grounds do you make this claim?
Doom. (Obviously.)
What mechanism are you imagining that would allow an LLM built of neurons to describe what it's like to be made of neurons, when an LLM built of GPUs cannot describe what it's like to be organised sand? The LLM in the GPU cluster is evaluated by performing the same calculations that could be performed by intricate clockwork, or very very slowly by generations of monks using pencil and paper. Just as the monks have thoughts and feelings, it is conceivable (though perhaps unverifiable) that the brain tissue implementing an LLM has conscious experience; but if so, that experience would not be reflected in the LLM's output.
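To make the substrate-independence point concrete, here's a minimal sketch in Python (the toy weights and function names are mine, purely for illustration): the same forward pass computed by a vectorised library and by plain "pencil and paper" arithmetic yields the same answer, so the output carries no information about the medium that produced it.

    # Same computation, two "substrates": vectorised linear algebra
    # vs. one arithmetic step at a time. Toy weights, not a real model.
    import math
    import numpy as np

    W = [[0.2, -0.1], [0.5, 0.3]]  # toy weight matrix
    x = [1.0, 2.0]                 # toy input vector

    def forward_numpy(W, x):
        # "GPU-style" substrate: vectorised matrix multiply + softmax.
        logits = np.array(W) @ np.array(x)
        exp = np.exp(logits - logits.max())
        return (exp / exp.sum()).tolist()

    def forward_by_hand(W, x):
        # "Monks with pencils" substrate: the same arithmetic, unrolled.
        logits = [sum(w * xj for w, xj in zip(row, x)) for row in W]
        m = max(logits)
        exp = [math.exp(l - m) for l in logits]
        s = sum(exp)
        return [e / s for e in exp]

    a, b = forward_numpy(W, x), forward_by_hand(W, x)
    assert all(math.isclose(p, q) for p, q in zip(a, b))
    print(a)  # identical either way: the output reflects the computation,
              # never whatever the substrate may or may not be experiencing.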
We can't assume that a computer-based neural network will have the same emergent behaviours as a biological one, or vice versa.
The interesting point for me is the neuroplasticity, because it implies that the networks which are specialised for language could start forming synapses connecting them to the parts which are more specialised to play Doom, giving rise to the possibility that this could be used for introspection.