A classic tell of this is people handling out-of-bounds errors in loops by randomly adding or subtracting 1 from their for-loop bounds.
I realized that they didn't have a mental model for what a loop did, they had simply memorized the syntax for a loop and were doing advanced pattern matching. Code repeats = write the for-loop syntax I've memorized. And then after seeing that fail with out of bounds exceptions, they learned a new rule: modify the loop parameters and see if that fixes the problem.
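To make that concrete, here's a minimal, hypothetical Python sketch of the kind of off-by-one bug I mean, and why the "tweak the bounds" fix can appear to work without any understanding:

```python
items = ["a", "b", "c"]

# The buggy loop: range(len(items) + 1) yields index 3, past the end.
# for i in range(len(items) + 1):
#     print(items[i])  # raises IndexError on the final iteration

# The guess-and-check "fix" is to subtract 1 from the bound until the
# error goes away. Here that lands on the correct loop, but only by
# accident, not because the invariant below was ever reasoned about:
for i in range(len(items)):      # valid indices are 0 .. len(items) - 1
    assert 0 <= i < len(items)   # the invariant a mental model makes obvious
    print(items[i])
```

With a mental model of the loop-as-machine, you never reach for trial and error: you know the valid index range before you type the loop.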
When I think about how I write code, or I compare their approach to the other cohort of students I saw, it's a different process. I see in my mind's eye a type of 'machine' that performs the actions that I want to take place. I simulate running that machine in my mind and tweak its design until it works the way I want it to. Only then do I think about syntax and try to translate what's already happening in my mind into source code.
I've seen people get shockingly far into software engineering careers using the pattern matching / guess-and-check approach. I've wondered if a lot of the handwringing you see on programming forums about the 'leetcode grind' comes from people who take this pattern matching approach. To them it must seem like the only way to solve these problems is to train their internal pattern matching neural networks on huge numbers of examples.
The code that I see GPT generate looks eerily similar to what I saw from those programmers. And that makes sense because I think that functionally they're doing the same thing. Only GPT does it at a superhuman level.
That seems to me to indicate that there's something that at least some humans do with a mental model that our current LLMs lack. If someone figures out how to simulate those mental processes in a computer program I think we'll see a huge inflection point and that's what the original comment (as I read it) is referring to.