I was wondering if I could get a different way of thinking about reasoning machines as such. Current reasoning models just try to externalize reasoning, either through chain-of-thought prompting or by fine-tuning on reasoning-focused datasets.
These approaches all seem hacky to me, and not really reasoning. I wanted to see if there are alternative, more fundamental ways to think about reasoning as an end in itself.