My point is, the question of whether an LLM reasons the same way a human does is about as useful as "does a submarine swim" or "can a telephone talk". The results speak for themselves.
That sounds like a false "both sides"-ing.
It's not symmetrical: there's a lot more money (and potential to grift) in hyping things up as miracle machines.
In contrast, most of the pessimists don't have a discernible profit motive.
You have artists who've lost work to diffusion models, teachers who can't assign homework essays anymore, people who hate Microsoft Copilot, anyone who doesn't want to be replaced by a bot or forced to adopt the tech just to avoid being outcompeted, and people set in their ways who don't want change or imagine it as destructive. It's a large crowd one can appeal to for personal gain, politics 101. Anyone with half-believable credentials can go on a talk show, say the things people want to hear, and maybe sell a book or two afterwards.
Are today's models on the brink of some exponential, self-perpetuating shot towards superintelligence? Obviously not. Are they overhyped glorified lookup tables? Also no. Are there problems? Definitely. But I don't think it's fair to dismiss a technology because someone misappropriates it in monopolistic endeavours, instead of directing that dismissal at those people themselves.
Like, similar to how Elon's douchebaggery has tainted EVs for lots of people for no practical reason, Altman's has done the same for LLMs.
There have been enough cases of models producing novel results that it's clear that whatever human trait they supposedly lack, they don't really need it. A car does not need legs; it does things differently. Having legs would even be a major detriment, holding it back from its top performance.
That's what those brain-simulation projects are conceptually, btw: cars with legs, or planes with flapping wings. That's why they all fail; the approach makes no sense.