> Similarly, are you worthless because you seem like you understand language but are incapable of counting the number of octets in "blueberry"?
Well, I would say that if GP advertised themselves as being able to do so, and then confidently gave an incorrect answer, they would be practically useless at serving their advertised purpose.
It is advertised (maybe not directly, but very insistently) as taking many jobs soon.
And counting things you have in front of you is a basic skill required everywhere. Counting letters in a word is just a representative task for counting boxes of goods, or money, or kids in a group, or rows in a list on some document; it comes up in all kinds of situations. Of course people insist that AI must do this right. The word bag perhaps can't do it itself, but it can call a better tool, in this case literally one line of Python (see the sketch below). And that is actually the topic the article touches on.
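To be concrete, here is a minimal sketch of what "one line of Python" means here, using the example word from the quote above:

    # The entire "better tool": count the octets (bytes) in a word.
    print(len("blueberry".encode("utf-8")))  # -> 9

The point is not the line itself but the delegation: the model doesn't need to count; it needs to recognize that this is a counting task and hand it to something that counts deterministically.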
People always insist that any tool must do things right, just as they insist that people do things right.
Tools are not perfect, people are not perfect.
Thinking that LLMs must get right the things people find simple is a common mistake, and it is common because we easily treat the machine as a person, while it is only acting like one.
It is advertised as being able to "analyze data" and "answer complex questions" [0], so I'd hope for it to reliably determine when to use those data-analysis capabilities to answer a question, if nothing else.