You nailed what I find discomforting about these discussions. They’re narrowly focused on a specific implementation that solves hitherto unsolved problems, and they dismiss it by pointing out that it doesn’t do already-solved ones. But surely folks realize the human brain isn’t a single monolithic processing program but an ensemble of specialized subsystems that organize to form the mind. Why would you assume we wouldn’t do the same with AI systems? We’ve been tackling reasoning, inference, problem solving, information retrieval, mathematics, logic, and other domains for decades, with some stupendous results. But those systems lacked the ability to ingest human language and translate it into some intermediate semantic form, and to reconstruct their output back into human language. Likewise, vision and audio input and output were a struggle until recently.
I also strongly disagree that it’s basically doing some sort of information retrieval, regurgitating Markov-style expectations over language. You can ask it to perform a very complex translation of a concept from one domain to another, expressed in a form that has certainly never existed before, and it does so with alacrity. At the very minimum it “remembers” things from earlier in the conversation, associates semantic ideas across prompts, and synthesizes cogent responses - that in itself implies it has some semantic “understanding” of the structure of the language. That is a huge missing piece in our toolkit to date.
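To make the contrast concrete, here is a minimal sketch (toy corpus and function names are my own illustration) of what a pure Markov-style next-word model actually is: the next token depends only on surface co-occurrence counts for the current word, with no state carried across a conversation. Cross-prompt association is precisely what this kind of model cannot do.

```python
from collections import defaultdict, Counter

# Toy bigram Markov model: P(next word | current word) estimated purely
# from adjacent-word counts in a tiny corpus. No semantic state, no memory
# beyond the single preceding word. (Illustrative example, not any real system.)
corpus = "the cat sat on the mat and the cat ran".split()

successors = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    successors[cur][nxt] += 1

def next_word_distribution(word):
    """Return P(next | word) as a dict, from surface co-occurrence alone."""
    counts = successors[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("the"))  # cat: 2/3, mat: 1/3
```

A model like this can only ever "regurgitate" local statistics of its training text; anything that tracks an idea across several prompts is, by definition, doing more than this.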
Frankly, I feel these threads expose just how jaded and unable to dream we have become: even when a wonder walks up and hits us in the nose, we can’t see it.