> The conclusion drawn from it, not so much. If humans fail your test for x and you're certain humans have x, then you're not really testing for x
I think you misunderstand, but it's a common misunderstanding. Humans have the *ability* to reason. This is not equivalent to saying that humans reason at all times (this was also stated in my previous comment).
So the claim is none of: "humans have x", "humans don't have x", or "humans have x but f doesn't have x because humans perform y on x and f performs z on x".
It's correct to point out that not all humans can solve this puzzle. But that's an irrelevant fact, because the premise is not that humans always reason. If you'd like to make the counterargument that LLMs are like humans in that they have the ability to reason but don't always exercise it, then you have to provide strong evidence (just as you need to provide strong evidence that LLMs can reason). But both claims are quite hard to prove, because humans aren't entropy minimizers trained on petabytes of text. It's easier to test humans because we generally have a much better idea of what they've been trained on, and we can also sample from different humans who have been trained on different types of data.
And here's the real kicker: when you find humans who can solve a problem (meaning not just state the answer but show their work), nearly all of them can easily adapt to novel augmentations.
So I don't know why you're talking about trickery. The models are explicitly trained to solve problems like these. There's no sleight of hand. There are no magic tokens, no silly or staged wording that would be easily misinterpreted. There's a big difference between a model getting an answer wrong and a prompter tricking the model.