LLMs fail at so many reasoning tasks (not unlike humans, to be fair) that they seem either incapable of reasoning or really poor at it. As far as reasoning machines go, I suspect LLMs will be a dead end.
By reasoning I mean, for example, being able to answer questions about the implications, applications, and outcomes of a given situation or issue. In my experience, things quickly degenerate into technobabble for non-trivial issues (also not unlike humans).