Reasoning properly means, at the very least, operating through processes that output correct results.
> Borges' library
Which is in fact made precisely of "non-texts" (the process that produces them is `String s = numToString(n++);` - they are encoded numbers, not "woven ideas").
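For concreteness, a minimal sketch of that generating process (the name `numToString` is the one quoted above; which 22 Latin letters fill Borges' 25-symbol alphabet is unspecified in the story, so the a..v choice here is an arbitrary assumption):

```java
public class Library {
    // Borges specifies 25 symbols: 22 letters plus comma, period and space.
    // The particular 22 letters are an assumption for this sketch.
    static final char[] ALPHABET = "abcdefghijklmnopqrstuv,. ".toCharArray();

    // Hypothetical numToString from the comment above: write n in base 25
    // over the alphabet, so every "book" is literally an encoded number.
    static String numToString(long n) {
        StringBuilder sb = new StringBuilder();
        do {
            sb.append(ALPHABET[(int) (n % ALPHABET.length)]);
            n /= ALPHABET.length;
        } while (n > 0);
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        long n = 0;
        for (int i = 0; i < 5; i++) {
            String s = numToString(n++); // the quoted generating step
            System.out.println(s);
        }
    }
}
```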
> many other problems which require an intelligence don't require the ability to solve these problems
Which ones? Which problems that demand producing correct solutions could be solved by a general processor that could not solve a "detective game"?
You didn't like "what colour is the sky" (without looking), ok. "Given the following [unseen during training] page of text, can you guess what emotion the main character is feeling at the end?" This is a problem that a human can solve, and many LLMs can solve, even if they can't solve the detective puzzle. In case it doesn't sound important, this can be reframed as a customer-service sentiment-recognition problem.
(I'd instead guess that you tried to reply before the timer - which lets HN members reply only after a delay that grows with the depth of the discussion tree - allowed you to.)
> do is not reasoning
What some people do is «not reasoning», for lack of training, or for lack of resources (e.g. time - Herbert Simon's "satisficing"), or for lack of ability. Since the late-2022 boom I have had to write that "if the cousin you write about is consistently not using the faculty of intelligence, you can't call her 'intelligent' for the purpose of this discussion". I have just written in another parallel discussion that «There is a difference between John, who has a keen ethical sense, Ron, who does not exercise it, and Don, who is a clinical psychopath with missing cerebral modules making him completely Values-blind» - of course, if we had to implement ethics we would "reverse-engineer" John and use Don as a counter-model.
> can you guess what emotion
Let me remind you of my words: «Without proper reasoning you can get some "heuristic", which can only be useful if you only needed an unreliable result based on "grosso modo" criteria». Is that problem one that has "true solutions" or one that has "good enough solutions"?
Let me give another example. Bare LLMs can be "good" (good enough) at, e.g., setting capitalization and punctuation in "[a-z0-9 ]" texts, such as raw subtitles. That is because they operate without explicitly pondering the special cases in which it is hard to decide unequivocally whether the punctuation there "should have been a colon or a dash"; such cases are generally rare, so the heuristic seems to suffice.
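For concreteness, a sketch of the task framing only, with no model involved (the sample sentence and the `strip` helper are invented for illustration): it produces the "[a-z0-9 ]" raw form that the heuristic is then asked to invert.

```java
import java.util.Locale;

public class Subtitles {
    // Reduce a text to the "[a-z0-9 ]" raw-subtitle form mentioned above.
    static String strip(String text) {
        return text.toLowerCase(Locale.ROOT).replaceAll("[^a-z0-9 ]", "");
    }

    public static void main(String[] args) {
        String original = "He had one goal: escape."; // colon or dash? subtle
        String raw = strip(original);
        System.out.println(raw); // prints "he had one goal escape"
        // The restoration task is the inverse: guess capitalization and
        // punctuation back. On the rare ambiguous cases (colon vs. dash),
        // a heuristic can only guess.
    }
}
```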
Similar engines are useless and/or dangerous in all cases in which correct responses are critical. Important problems are those which require correct responses.
According to your definition of reasoning, which involves surely getting the right answer, no human does reasoning. Probably less than 1% of published mathematics meets the definition.
> Important problems are those which require correct responses.
There are many important problems where formal reasoning is not possible, and yet a decision is required, and both humans and LLMs can provide answers. "Should I accept this proposed business deal / should I declare war / what diagnostic test should I order?" We would like correct responses for these problems, but it is not possible, even in principle, to guarantee correctness. So yes, we use heuristics and approximate reasoning for such problems. Is an LLM "unreliable" or "dangerous" on such problems? Maybe yes, and maybe more so than humans, but maybe not; it depends on the case. To keep the point of the thread in focus: an LLM should probably not try to solve such problems by writing code.
Human "reasoning" (ie speech or self-talk that sounds a bit like reasoning) often outputs correct results. Does "often" fit the definition?
> Which problems that demand producing correct solutions could be solved by a processor which could not solve a "detective game"?
For example, "what colour is the sky right now?". A lot of people could solve this (even if they haven't looked outside), and so could a lot of language models, which can't solve this detective game.
No: "proper reasoning" is that process which given sufficient input will surely bring to a correct output owing to the effectiveness of its inner workings.
> what colour is the sky right now
Answering that does not require a general problem solver, and "output the most common recorded reply to a question" is certainly not a general problem solver; the responses from the box described will easily be worthless in all the special cases where asking the question would actually make sense.
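To make that "box" concrete, a toy sketch (the class name and the table entry are invented for illustration): a frequency table of recorded replies, not a problem solver.

```java
import java.util.Map;

public class CommonReplyBox {
    // Invented entry: the most common recorded reply to the question.
    static final Map<String, String> MOST_COMMON_REPLY = Map.of(
        "what colour is the sky right now?", "blue"
    );

    static String answer(String question) {
        return MOST_COMMON_REPLY.getOrDefault(question, "(no recorded reply)");
    }

    public static void main(String[] args) {
        // Right often enough in the ordinary case, and worthless precisely
        // in the special cases (sunset, storm, eclipse) where asking the
        // question would actually make sense.
        System.out.println(answer("what colour is the sky right now?"));
    }
}
```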