There’s a good reason why schools spend so much time training that skill!
It is easy to see why: the LLM doesn't communicate what it thinks, it communicates what it thinks a human would communicate. A human explains their inner process and then actually goes through that inner process. An LLM explains a human's inner process, then generates a response using a totally different process.
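To make that concrete, here's a minimal sketch (assuming a HuggingFace-style causal LM; gpt2 is just a stand-in for any LLM). The "explanation" the model produces comes out of the exact same next-token loop as everything else it says; there is no separate path that reads internal state and reports on it:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Explain step by step how you solved the problem:"
ids = tok(prompt, return_tensors="pt").input_ids

# The "explanation" is generated by sample-append-repeat, the same
# process as any other text. Nothing here inspects the model's
# activations and describes them; it just predicts plausible tokens.
with torch.no_grad():
    out = model.generate(ids, max_new_tokens=40, do_sample=True)
print(tok.decode(out[0]))
```

Whatever comes out of that loop is a plausible-sounding account of how a human would reason, not a trace of what the forward pass actually did.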
So while it's true that humans don't have perfect introspection, the fact that we have any introspective access to our own thoughts at all is extremely impressive. An LLM has no part that analyzes its own thoughts the way humans do, which means it has no clue how it actually thinks.
I have no idea how you would even build introspection into an AI. How are we able to analyze our own thoughts in the first place? What even is a thought? What would the introspection part of an LLM do, what would it look like, and would it identify thoughts and talk about them the way we do? That would be incredibly cool, but it's not even on the horizon. I doubt we'll see it in our lifetime; getting there would take some massive insight that changes the AI landscape at its core.
But once you have that introspection, I think AGI will happen almost instantly. Currently we train models with dumb math; introspection would let a model train itself intelligently, the way humans do. I also think AI will never fully replace humans without it: intelligent introspection seems like a fundamental part of general intelligence and of learning from chaos.
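For what I mean by "dumb math": a training step is just a fixed gradient-descent update, applied mechanically. A toy PyTorch sketch (the tiny linear model here is purely illustrative):

```python
import torch

w = torch.randn(3, requires_grad=True)        # toy "model": three weights
x = torch.tensor([1.0, 2.0, 3.0])             # one training example
y = torch.tensor(10.0)                        # its target

loss = (w @ x - y) ** 2                       # squared error
loss.backward()                               # gradient computed mechanically

lr = 0.01
with torch.no_grad():
    w -= lr * w.grad                          # w <- w - lr * grad, applied blindly
```

The model has no say in how or what it learns; the update rule is the same arithmetic no matter what the weights currently "know". An introspective system would be deciding that part for itself.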