But there are related, slightly better (more immediately testable) ideas in the same space. One such is a "behavioral zombie": a system behaviorally indistinguishable from a human.
For example: the screen I am currently looking at displays a perfect reproduction of your words, and I have no reason to think the screen is conscious. I can't infer consciousness from what it shows me: not from text, not from video of a human doing human things.
Before LLMs, I had every reason to assume that the generator of such words would be conscious. Before the image, sound, and video generators, the same went for pictures, voices, and video.
Now? Now I don't know. Not in the sense that LLMs do operate on this forum and (sometimes) make decent points, so you might be one, but in the sense that I don't know whether LLMs have whatever the ill-defined thing is that gives me an experience of myself tapping this screen as I reply.
I don't expect GenAI to be conscious (our brains do a lot even without consciousness), but I can't rule the possibility out either.
But I can't use the behaviour of an LLM to answer this question, because one thing is absolutely certain: LLMs were trained to roleplay, and they are very good at it.
To reduce a system to its inputs and outputs is fine if those are all that matter in a given context, but in doing so you may fail to understand its internal mechanics. Those matter if you're trying to really understand the system, no?
yes.
> To reduce a system to its inputs and outputs is fine if those are all that matter in a given context
We argue that this is indeed all that matters.
> but in doing so you may fail to understand its internal mechanics
The internal mechanics are what we call "conscious". It is the grouping of those internal mechanics into one unified concept, but we don't care exactly what they are.
> Those matter if you're trying to really understand the system, no?
Since we cannot directly observe consciousness, we are forced to concede that we will never really "understand" it beyond observing its effects.
In the same way that a Mechanical Turk human and a chess robot can both "play chess", a human and an LLM are both "conscious". That is, "conscious" names an ability realized by some mechanism, just as "plays chess" does; the exact mechanism is irrelevant for the purposes of a yes/no answer.
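To make that behavioral framing concrete, here is a minimal sketch in Python (all names are hypothetical, invented for illustration): a caller that sees only the input/output contract cannot distinguish a concealed human from an engine, which is exactly the sense in which both "play chess".

```python
# A minimal sketch of the behavioral view; names are hypothetical.
from typing import Protocol


class ChessPlayer(Protocol):
    """The behavioral contract: given a position, return a move."""

    def move(self, position: str) -> str:
        ...


class MechanicalTurk:
    """A human hidden inside the cabinet chooses the move."""

    def move(self, position: str) -> str:
        return "e2e4"  # stand-in for whatever the hidden human plays


class Engine:
    """A completely different internal mechanism: a search algorithm."""

    def move(self, position: str) -> str:
        return "e2e4"  # stand-in for whatever the search computes


def observe(player: ChessPlayer, position: str) -> str:
    # From out here, the two implementations are indistinguishable:
    # "plays chess" is defined entirely by behavior, not by mechanism.
    return player.move(position)
```

On this view, asking which of the two is "really" playing chess adds nothing beyond the observed moves; the disagreement in this thread is over whether "conscious" works the same way.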
We now enter a discussion of how much these two consciousnesses differ.
Why? You are applying a definitive term ("never") to something we might achieve in the future. We might observe consciousness one day. Who knows? Consciousness is a known unknown: we know there is something, but we don't know how to observe it properly or how we could eventually copy it.
In the meantime, we are not copying consciousness; we have a shallow replication of its output. When cavemen replicated the fire they observed as the output of lightning, did they master electricity?
But we do agree that it exists. Our direct experience tells us so.
> we are forced to concede that we will never really "understand" it outside of observing its effects.
Not necessarily. A gap in our ability to observe something does not imply that (a) we will never observe it or (b) what we don't know is not worth knowing.
Throughout history, persistent known unknowns have pushed people to appeal directly to the supernatural, which short-circuits further discovery when they stop there. But the real fallacy is saying "we don't know, and it doesn't matter"; that's a far more direct short-circuit to gaining knowledge. In both cases, a lack of curiosity is the underlying problem.