Absolutely not, this is not remotely "clear", and it's a very strange thing to assert.
> The hallucinations don’t make it less intelligent because it’s not “trying” to avoid them, as you seem to know already
What? No. What does "as you seem to know already" mean in this context?
I guess it depends on how you define intelligence, but I would say intelligence is the ability to find the best action to take to achieve a given goal, and AI can do that reasonably well.
> What does "as you seem to know already" mean in this context?
It means that, based on the comment I was replying to, the person seems to already understand the point I just made.
https://plato.stanford.edu/entries/chinese-room/#LargPhilIss...
"Searle could receive Chinese characters through a slot in the door, process them according to the program's instructions, and produce Chinese characters as output, without understanding any of the content of the Chinese writing."
Sure, but that doesn't mean the state of the program doesn't contain any understanding or intelligence; it's just that the human doesn't have a high-level view that can be used to decode that internal state. We're not asking whether the computer chip itself understands things, but whether something contained in the program running on it does. The human could also run a physics simulation, as in https://xkcd.com/505/, and recreate a human brain, which would be no different from a physical brain in terms of behavior, so there would be no reason not to call it intelligent.
> but that doesn't mean the state of the program doesn't contain any understanding or intelligence
Programs don't contain understanding or intelligence, they contain instructions.
> We're not asking whether the computer chip itself understands things but whether the something contained in the program running on it does.
I feel like you're saying "I'm not accusing the blender of being intelligent, I'm saying the recipe for this margarita is self-aware." It doesn't matter whether it's hardware or software; neither is capable of understanding, because understanding is a conscious experience, and neither a blender nor a recipe is sentient.
> The human could also run a physics simulation
Cool XKCD, but I'm not arguing about whether AI is possible. I'm just pointing out that convolutional neural networks are not self-aware, intelligent, or actually learning (at least not yet).