I would have given similar examples to show that ChatGPT makes the same kinds of mistakes that humans do. The first one is good, because ChatGPT can solve it easily when you present it as a riddle rather than as a genuine question. Humans use context and framing in the same way; I'm sure you've heard of the Wason selection task: https://en.wikipedia.org/wiki/Wason_selection_task
When posed as an abstract logic problem, few people solve it correctly. But when the same rule is framed in social terms (say, checking who is old enough to drink), it suddenly seems easy. This suggests humans aren't reasoning over fundamental abstract concepts here, but relying on heuristics and contextual information.
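For anyone who hasn't tried it, the abstract version boils down to a falsification check. A quick sketch, using the standard A/K/4/7 card setup and the rule "if a card shows a vowel, the other side is even":

```python
# Wason selection task, abstract version.
# Rule: "if a card shows a vowel on one side, it has an even number on the other."
# Visible faces: A, K, 4, 7. You may only flip cards that could falsify the rule:
# the vowel (its back might be odd) and the odd number (its back might be a vowel).
# Flipping K or 4 can never disprove the rule, yet most people pick 4.

cards = ["A", "K", "4", "7"]

def could_falsify(face):
    """True if the hidden side of this card could violate 'vowel -> even'."""
    if face.isalpha():
        return face.lower() in "aeiou"  # a vowel's back might be odd
    return int(face) % 2 == 1           # an odd number's back might be a vowel

must_flip = [c for c in cards if could_falsify(c)]
print(must_flip)  # ['A', '7']
```

The "drinking age" framing is exactly this check, yet almost everyone gets that version right.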
The second example you give is even better. It's designed to trick the reader into thinking of the number 30 by placing the phrase "half my age" before the number 60: it uses context as obfuscation. In this case, showing ChatGPT an analogous problem with different wording lets it see how to solve the first one. You might even say it's able to notice the fundamental abstract concepts that both problems share.
The third problem is also a good example, but for the wrong reason: I can't solve it either. If you had spoken it to me slowly five times in a row, I doubt I could have given the right answer. If you gave me a pencil and paper, I could work through the steps one by one in a mechanical way... but solving it mentally? Impossible for me.
> It is run through a grammatical filter/generator at the end so it's usually grammatical, but no sort of truth filter (or ethical filter for that matter either).
I kind of thought it did get censored by a sort of "ethical filter" (very poorly, obviously), and I also wasn't aware it needed grammatical assistance. Do you remember where you heard this?
Here's my chat with it, if you're interested: https://pastebin.com/raw/hQQ8bpsB
But comparing 1 human to 1 GPT is mistaken to begin with. It's like comparing 1 human with 1 Wernicke's area or 1 angular gyrus. If you had 100 different ChatGPTs, each optimized for a different task and able to communicate with each other, then you'd have something more similar to the human brain.