There are a lot. First of all, these LLMs do not learn in-situ. They are entirely static (apart from the prompt). To teach an LLM something new is an ex-situ process, more or less totally unrelated to the way it predicts. Contrast that with a brain: brains are constantly learning (in fact, it is difficult to imagine how a brain as we understand it could work without constantly learning).
In a related way, because we learn on-line and constantly, our brains also have to maintain goals, rewards and punishments, and so on. We have neurons for all of the trivia of keeping us moving, seeking new input, generalizing it, throwing away bad information, etc. For an LLM, all of that is external. The LLM doesn't even have any reason to distinguish between generation and training: all the weight updates are calculated by a (relatively simple) external process. Furthermore, LLMs are entirely _feed forward_. The input comes in, a lot of numbers are crunched, and the output comes out. There is no rumination (again, the closest analogue to rumination for an LLM is the training process, which is not embodied in the LLM itself).
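To make the generation/training split concrete, here is a toy sketch (plain Python, nothing like a real transformer): inference is a pure function of frozen weights, and "learning" only happens in a separate, external step. The function names and the gradient rule are illustrative assumptions, not how any particular LLM is implemented.

```python
# Toy model: generation is a pure feed-forward function of frozen weights;
# learning lives entirely in a separate, external training step.

def generate(weights, x):
    # Forward pass only: numbers in, numbers out, weights untouched.
    return sum(w * xi for w, xi in zip(weights, x))

def external_training_step(weights, x, target, lr=0.1):
    # The "learning" is outside generation: an external process computes
    # an error signal and returns a *new* set of weights.
    error = generate(weights, x) - target
    return [w - lr * error * xi for w, xi in zip(weights, x)]

weights = [0.5, -0.2]
before = list(weights)

_ = generate(weights, [1.0, 2.0])   # inference
assert weights == before            # generation never changes the model

weights = external_training_step(weights, [1.0, 2.0], target=1.0)
assert weights != before            # only the external step updates it
```

The point of the sketch is just the asymmetry: `generate` has no mechanism to modify its own weights, while the update rule is a wholly separate computation.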
Much of the content of our consciousness is perceptions relating to all of these things. I think it's possible that artificial neural networks may one day do enough of these things that I would admit they are conscious, but architecturally and fundamentally, I don't see any reason that an LLM would have them.
I also don't think even GPT-4 is that intelligent (fantastic recall, though). It does an impression of a cognitive process (literally by printing out steps), but that doesn't seem compelling enough for me to imagine a theory of mind underneath. A model of text, sure, but not a mind.