I mean, it's less exciting when you break that down into what it means for a person to be aware of themselves. For me, it breaks down to neurons & symbols, well-known scientific domains. While I may not be right, it's not difficult to come up with plausible explanations for the phenomena humans experience. Most of the mystery comes from it being very difficult for most people to even define consciousness. Much of the vocabulary we have for it (in the West, at least) is cobbled together from various religions, spiritualities, and extremely, extremely dense philosophers read by few and understood by fewer.
You might find Douglas Hofstadter's book "I Am a Strange Loop" illuminating if you find my explanation meager.
I'm not aware of "neurons and symbols". I'm aware of feeling, sight, smell, etc. The fact that consciousness is an experience is what's fundamentally at odds with building a consistent model. Modeling cognition is the easy part.
It is just an obvious conclusion once you observe the continuous chain of living matter between the first clumps of amino acids in the primordial soup 2B+ years ago and you. There is just no point where one can say that consciousness didn't exist before and started to exist immediately after. It was just increasing as the complexity of the system increased, and it will continue to increase beyond humans. The carriers of future consciousness will look at what we call consciousness the way we look at a lizard's consciousness today.
>then any implementation would have to be conscious, the often quoted simulation of a brain inside a computer as well as a gigantic pile of levers, gears and pulleys in the right arrangement.
Given that these examples are simpler than even a simple cell, it is no surprise that we don't observe any noticeable consciousness in them.
>The thought that a collection of gears can become aware of its existence seems, at least to me, pretty outlandish.
Can a collection of gears, given enough size and complexity, behave so as to maximize the entropy it would generate over its whole period of existence? Can it make a copy of itself, with modifications that increase the target entropy achievable by the copy? These two "can"s are actually what define living matter. Consciousness is just the emergent algorithm for maximizing that target entropy. Self-awareness is just the ability of such an algorithm to observe its own "subroutines".
What does it mean to observe? If we design an algorithm that observes its internals according to your definition, is it going to be self-aware?
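For concreteness, here's a toy sketch (Python, using only the standard sys.settrace hook; the function names are invented for illustration) of the most literal reading of "observes its own subroutines": a program that logs which of its own functions run.

    import sys

    # Toy sketch: record which of this program's own functions get called,
    # i.e. "observe its own subroutines" in the most literal, mechanical sense.
    observed_calls = []

    def tracer(frame, event, arg):
        if event == "call":
            observed_calls.append(frame.f_code.co_name)
        return tracer

    def step_a():
        pass

    def step_b():
        step_a()

    sys.settrace(tracer)
    step_b()
    sys.settrace(None)

    # The program now holds a record of its own activity.
    print(observed_calls)  # ['step_b', 'step_a']

Whether that kind of bookkeeping counts as "observing" in the sense you mean is exactly what I'm asking.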