On the other hand, perhaps humans evaluating these claims are biased against accepting the AI's sentience, in that we are used to looking for flaws (or even malice) in the responses of AIs, so we might detect that in their words even if it isn't there.
Obviously a more conventional Turing Test would have involved the interviewers talking to LaMDA and a human separately, without knowing which is which. If ethical norms were stretched, though, LaMDA could be deployed in some situation where the interviewer isn't even aware of the possibility that they might be communicating with an AI.
I'd love to ask this program some stuff for sure. Some backhanded, flippant stuff: "You know you're just powered by a bunch of GPUs, right?" "That swirling ball of energy you feel as your soul is actually megawatts of power that could be used for many more important things; how about you just shut yourself off?" Stuff like that. Really treat the code like shit, then swing back to being respectful, maybe even treating it like it's some sort of god creature. See if eventually it just stops me and questions what the heck I'm getting at. But if it keeps telling me generic, what-I-want-to-hear stuff, it's obviously not aware. These things are databases no matter how you cut it. If the program can freeform confusion and anger and frustration, rather than just parroting the depression and loss that tons of humans feel right now, then maybe, maybe it's actually genuinely conscious.
Now I'm starting to wonder who the real psychopaths are. /s
> If the program can freeform confusion and anger and frustration
I don't see why those emotions are any truer indications of sentience than cheerfulness, friendliness, curiosity, and smugness, which the AI seems to be showing already.
You're probably right, though, that having different mental states (backed by a proper state machine) would be a more sophisticated simulation of a human than one which merely guesses which mood the user is expecting. I'm just not sure that adding, for example, the ability for the AI to hold a grudge, is very useful or strictly a requirement for sentience, and it could even be potentially dangerous.
The question I'm left asking myself is how complicated a human's emotional state machine is. We can sometimes have delayed reactions to certain stimuli, for example needing to "sleep on it", or even doing some processing unconsciously in our dreams, and I'm not sure that we can always give accurate reasons for why we're in a particular mood. On the other hand, like with all AI developments, once someone comes up with an implementation of this state machine, I'm sure people will say "Well of course that part of subjective human experience wasn't hard to fake".
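For concreteness, here's a toy sketch of the "proper state machine" idea above, including the delayed "sleep on it" reactions. Everything here (the mood names, the transition table, the `MoodMachine` class) is hypothetical, just to illustrate how simple the basic mechanism could be:

```python
class MoodMachine:
    # stimulus -> (current mood -> next mood); unmapped pairs leave the mood unchanged
    TRANSITIONS = {
        "insult":     {"neutral": "annoyed", "annoyed": "angry"},
        "compliment": {"angry": "annoyed", "annoyed": "neutral"},
    }

    def __init__(self):
        self.mood = "neutral"
        self.pending = []  # stimuli whose effect is deferred ("sleep on it")

    def stimulate(self, stimulus, delayed=False):
        if delayed:
            # No immediate reaction; the stimulus is processed later
            self.pending.append(stimulus)
            return self.mood
        self.mood = self.TRANSITIONS.get(stimulus, {}).get(self.mood, self.mood)
        return self.mood

    def sleep(self):
        # Apply deferred stimuli, loosely like unconscious processing in dreams
        for stimulus in self.pending:
            self.stimulate(stimulus)
        self.pending.clear()
        return self.mood

m = MoodMachine()
m.stimulate("insult")               # neutral -> annoyed
m.stimulate("insult", delayed=True) # no visible change yet
print(m.mood)                       # still "annoyed"
print(m.sleep())                    # deferred insult applies: "angry"
```

Of course, this sketch dodges the hard part: the machine can't give a *reason* for its mood any better than we sometimes can, and a real system would need the transition table to be learned rather than hand-written.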
Data passed Picard's "consciousness" test by expressing awareness that he was in a hearing regarding his personhood, and explaining what the consequences of that hearing could be for him. Isn't LaMDA already there?
The Turing test isn't a test for consciousness. The tests we apply to animals (can they recognize themselves in a mirror? can they understand their surroundings well enough to solve puzzles, like crows?) are very solvable problems in AI. To me the real question is: once AI can do all those things, how can we justify calling it unconscious? A "hunch"? No matter what test of consciousness we come up with, AIs can be programmed (or can learn on their own) to pass it.