You are right, there is still the experiential question of consciousness.
That is, qualia: whether the experience of a color, of cold, or of what it feels like to be self-aware.
I think that will become tractable with AI, since we will be able to adjust what their minds have access to vs. what they don’t, and experiment with various levels of information awareness.
Qualia would seem to be a functionally necessary result of a self-aware system dealing with information encoded at a lower level, with no access to that encoding.
Information is provided, but the encoding is inaccessible, so it gets perceived as … something the system can’t decompose. So can’t describe.
We are made aware of a signal we perceive as red, but cannot mentally decompose perceptual “red” into anything else.
So the question is, how does differently encoded information provided to our self-aware level, without any access to the coding, get interpreted as the specific qualities we perceive?
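The access barrier described above can be sketched as a toy program. This is purely illustrative, not a claim about how minds actually work: a hypothetical lower level encodes stimuli privately, while a higher "self-aware" level receives only opaque handles it can label and compare, but never decompose.

```python
class LowerLevel:
    """Encodes raw stimuli; the encoding itself stays private."""
    def __init__(self):
        self._codes = {}   # handle -> private encoding (never exposed)
        self._next = 0

    def encode(self, stimulus):
        handle = self._next
        self._next += 1
        # hash() is a stand-in for a rich, inaccessible neural encoding
        self._codes[handle] = hash(stimulus)
        return handle      # only the opaque handle crosses the barrier

    def same(self, a, b):
        # The lower level can compare encodings; the upper level cannot.
        return self._codes[a] == self._codes[b]


class SelfAwareLevel:
    """Receives handles only; perceives signals it cannot decompose."""
    def __init__(self, lower):
        self.lower = lower
        self.labels = {}

    def perceive(self, handle, label):
        self.labels[handle] = label   # e.g. call the signal "red"

    def describe(self, handle):
        # It can name the signal, but has nothing to decompose it into.
        return self.labels.get(handle, "something I can't describe")


lower = LowerLevel()
mind = SelfAwareLevel(lower)
h1 = lower.encode("650nm light")
h2 = lower.encode("650nm light")
mind.perceive(h1, "red")
print(mind.describe(h1))        # -> red (a label, not a decomposition)
print(lower.same(h1, h2))       # -> True (only the lower level can tell)
print(mind.describe(h2))        # -> something I can't describe
```

The point of the sketch is the asymmetry: equality of two signals is decidable below the barrier, but above it the system can only attach and report labels, which parallels being able to say "red" without being able to say what red is made of.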
Wild thoughts:
Once we understand how qualia emerge, what would the “qualia space” look like? What are the rules or limits?
Will we be able to somehow analyze a creature and infer its qualia in a way that we (or an AI) can then experience directly?
Will designing qualia for AI be a thing? A useful thing? Or are they simply isomorphic to the type and relationships of data they represent? Or just a temporary glitch on the way to more fully self-aware, self-observable, self-designed life?