I'm not sure what that means specifically. I don't agree overall. Only certain types of problems encountered by LLMs map cleanly to well-understood problems where existing solvers are perfect.
All problems require proficient reasoning to reach a proper solution, not only puzzles. Without proper reasoning you get, at best, a "heuristic", which is useful only if an unreliable result based on rough, approximate criteria was all you needed.
Right, but the question is whether this is good enough, and what counts as "proper". A lot of what we call proper reasoning is still quite informal, and even mathematics is usually not formal enough to be translated directly into a proof assistant like Coq.
So this is a deep question: is talking reasoning? Humans talk (out loud, or in their heads). Are they then reasoning? Sure, some of what happens internally is not just self-talk, but the thought experiment goes: if the problem is not completely ineffable, then (a bit like Borges' library) there is some 1000-word English text which is the best possible reasoned, witty solution to the problem. In principle, an LLM can generate that.
If your goal is a reductio (i.e., that my statement must be false because it implies models should write code for every problem), then I disagree: while the ability to solve these problems might be a requirement for being deemed "an intelligence", many other problems that require an intelligence don't require the ability to solve these problems.
Reasoning properly is at least operating through processes that output correct results.
> Borges' library
Which is in fact made exactly of "non-texts": the process that produces them is `String s = numToString(n++);`. They are encoded numbers, not "woven ideas".
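That one-liner can be made concrete with a small sketch. Assumptions: the `Babel` class and its toy three-symbol alphabet are hypothetical, for illustration only (Borges' library uses a 25-symbol alphabet). The point is that "generating every text" is just counting: each string is the bijective base-k numeral of an integer.

```java
// Hypothetical sketch of the library-as-counter idea: every "text" is just
// the bijective base-k encoding of an integer. The 3-symbol alphabet is an
// assumption for illustration.
public class Babel {
    static final String ALPHABET = "ab "; // toy alphabet (assumption)

    // Bijective base-k numeral: each n >= 1 maps to exactly one non-empty string,
    // so walking n = 1, 2, 3, ... enumerates every possible text exactly once.
    static String numToString(long n) {
        int k = ALPHABET.length();
        StringBuilder sb = new StringBuilder();
        while (n > 0) {
            n--; // bijective numeration avoids "leading zero" duplicates
            sb.append(ALPHABET.charAt((int) (n % k)));
            n /= k;
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        // Incrementing the counter "weaves" no ideas; it only decodes numbers.
        for (long n = 1; n <= 6; n++) {
            System.out.println(n + " -> \"" + numToString(n) + "\"");
        }
    }
}
```

With 3 symbols there are 3 one-character strings (n = 1..3), 9 two-character strings (n = 4..12), and so on; the "best 1000-word solution" exists somewhere in this sequence, but the process that reaches it is pure enumeration, not reasoning.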
> many other problems which require an intelligence don't require the ability to solve these problems
Which ones? Which problems that demand producing correct solutions could be solved by a general processor which could not solve a "detective game"?