Considering that in its initial demo, on very anodyne, "normal" use cases like "plan me a Mexican vacation," it spat out more falsehoods than truths... this seems like a problem.
Agreed on the meta-point that deliberate tool mis-use, while amusing and sometimes concerning, isn't determinative of the fate of the technology.
But even without tool mis-use, the failure rate seems quite high anecdotally, which also comports with our understanding of LLMs: hallucinations are quite common once you stray even slightly outside of what is heavily represented in the training data. Height of the Eiffel Tower? High recall accuracy. Is this arbitrary restaurant in Barcelona any good? Very low accuracy.
The question is how much of the useful search traffic looks like the latter vs. the former. My suspicion is "a lot."