>Yes, it will be confused as well, and for all outwardly observable signs will fail to make sense of the stimuli, yet it will be "aware" of its inability to understand, much like a human brain would.
>If you doubt that, open a new session and type some random tokens, you will get the answer that it's confused.
There is no empirical evidence of any awareness whatsoever in any LLM. Even their most immersed creators make no such claim. An LLM itself saying something about awareness means nothing: it is literally designed to mimic exactly that kind of statement. And you speak of discussions of consciousness as being philosophical and unanswerable?
At least when talking about human awareness, we apply these ideas to minds that we, as humans, perceive from our own experience (flawed as it is) to be aware and self-directed. You're applying the same notion to something that shows no evidence of awareness, while simultaneously criticizing assumptions of consciousness in a human brain?
An argument that sloppy does indeed make appeals to authority necessary, I suppose.