From a computer science point of view: a single prompt/response cycle from an LLM is equivalent to a pure function (at least with deterministic decoding, e.g. temperature 0); the answer is a function of the prompt and the model weights, and is fundamentally reducible to evaluating a big math expression in which each model parameter is a term.
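A toy sketch of that framing, where `llm` is a hypothetical stand-in for a real model, not an actual API: fixed weights plus a fixed prompt always yield the same answer.

```python
# Hypothetical sketch: one prompt/response cycle as a pure function.
# `llm` and `weights` are toy stand-ins, not a real model or library call.
def llm(prompt: str, weights: dict) -> str:
    # With the weights held fixed (and deterministic decoding),
    # the same prompt always maps to the same answer.
    return f"answer-{hash((prompt, tuple(sorted(weights.items()))))}"

weights = {"layer1": 0.5, "layer2": -0.3}
# Pure: same inputs, same output, no hidden state between calls.
assert llm("hello", weights) == llm("hello", weights)
```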
It seems almost self-evident that "reasoning" worthy of the name would involve some sort of iterative/recursive search process: invoking the model, then storing, reflecting on, and improving answers methodically.
There's been a lot of movement in this direction with tree-of-thought/chain-of-thought/graph-of-thought prompting, and I would bet that if/when we get AGI, it'll be the result of getting the right recursive prompting patterns + retrieval patterns + ensemble models figured out, not just making ever-more-powerful transformer models (though that would certainly play a role too).
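The store/reflect/improve loop described above can be sketched in a few lines. This is a minimal illustration, not any particular paper's algorithm; `call_model` is a hypothetical placeholder for whatever LLM invocation you have (API or local).

```python
# Hedged sketch of a reflect-and-refine loop. `call_model` stands in for
# any real LLM call; here it's a toy that tags each draft as refined.
def call_model(prompt: str) -> str:
    # Toy model: echo the latest answer with a refinement marker.
    return prompt.split("ANSWER:")[-1].strip() + " [refined]"

def refine(question: str, rounds: int = 3) -> list[str]:
    """Keep every draft, critique the latest one, and ask for a better one."""
    drafts = [call_model(f"Q: {question}\nANSWER: first attempt")]
    for _ in range(rounds - 1):
        prompt = (
            f"Q: {question}\n"
            f"Previous ANSWER: {drafts[-1]}\n"
            "Critique the previous answer, then give a better one.\n"
            "ANSWER: improved attempt"
        )
        drafts.append(call_model(prompt))
    return drafts  # stored methodically, each round building on the last

history = refine("What is reasoning?")
```

The point is the control flow around the model, not the model itself: the loop, the stored drafts, and the critique prompt are where the "search" lives.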
The LLM isn't the whole brain. Just the area responsible for language and cultural memory.