Your attempt to trivialize it doesn't make any sense. It's like watching someone try to trivialize the moon landing: "Oh, all we did was put a bunch of people in a metal cylinder, then light the tail end on fire. Boom, simple propulsion! And then we're off to the moon! You don't need any intelligence to do that!"
>I'm saying that it "understands" your query only insofar as its words can be tied to the web of associations it's memorized. The impressive part (to me) is that some of its concepts can act as facades for other concepts: it can insert arbitrary information into an HTML document, a poem, a shell session, a five-paragraph essay, etc.
You realize the human brain CAN only be the sum of its own knowledge. That means anything creative we produce, anything at all that comes from the human brain, is DONE by associating different things together. Even the concept of understanding MUST work this way, simply because the human brain can only create thoughts by transforming its own knowledge.
YOU yourself are a web of associations. That's all you are. That's all I am. The difference is we have different types of associations to draw on. We have the context of a three-dimensional world with sound, sight, and emotion. ChatGPT must do all of the same things with only textual knowledge and a simpler neural network, so it's more limited. But the concept is the same. YOU "understand" things through "association" too, because there is simply no other way to "understand" anything.
If this is what you mean by "reasoning by analogy," then I hate to tell you this, but "reasoning by analogy" is "reasoning" in itself. There's really no form of reasoning beyond associating things you already know. Think about it.
>But none of this shows that it can relate ideas in ways more complex than the superficial, and follow the underlying patterns that don't immediately fall out from the syntax. For instance, it's probably been trained on millions of algebra problems, but in my experience it still tends to produce outputs that look vaguely plausible but are mathematically nonsensical. If it remembers a common method that looks kinda right, then it will always prefer that to an uncommon method.
See, here's the thing: some stupid math problem it got wrong doesn't change the fact that the feat performed in this article is ALREADY more challenging than MANY math problems. You're dismissing all the problems it got right.
The other thing is, I feel it knows math about as well as some D student in high school. Are you saying the D student in high school can't understand anything? No. So you really can't use this logic to dismiss LLMs, because PLENTY of people don't know math well either, and you'd have to deny that they're sentient beings if you followed your own reasoning to its logical conclusion.
>I mean, it's not utterly impossible that GPT-4 comes along and humbles all the naysayers like myself with its frightening powers of intellect, but I won't be holding my breath just yet.
What's impossible here is flipping your bias. You and others like you will still be naysaying LLMs even after they take your job. Like software with its bugs, these AIs will always have some flaw or weakness along some dimension of their intelligence, and your bias will lead you to magnify that weakness (like how you're currently magnifying ChatGPT's weakness in math). Then you'll dismiss the fact that ChatGPT took over your job as some trivial "word association" phenomenon. There's no need to hold your breath when you wield control of your own perception of reality and perceive only what you want to perceive.
Literally any feat of human intelligence or artificial intelligence can be turned into a "word association" phenomenon using the same game you're running here.