Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.
You take a dead person's brain, run some current through it, and it jumps. Do you believe this is equivalent to a living human being?
Indeed, those are exactly the questions you need to ponder.
It might also help to consider that the human brain itself is made of cells, and cells are made of various pieces that are all very obviously machines; we're able to look at, identify, and catalogue those pieces, and as complex as molecular nanotech can be, the individual parts are very obviously not alive by themselves, much less thinking or conscious.
So when you yourself are engaging in thought, such as when writing a comment, what exactly do you think is alive? The proton pumps? The cell membranes? The proteins? If you assemble them into chemically stable blobs, and have them glue to each other, does the resulting brain become conscious?
> Even if you're a materialist, surely you think there is a difference between a human brain and a brain on a lab table.
Imagine I'm so great a surgeon that I can take a brain out of someone, keep it on a lab table for a while, and then put it back in, and have that someone recover (at least well enough that they can be interviewed before they die). Do you think this is fundamentally impossible? Or do you believe the human brain somehow transmutes into a "brain on a lab table" as it leaves the body, and then transmutes back when plugged back in? Can you describe the nature of that process?
> You take a dead person's brain, run some current through it, and it jumps. Do you believe this is equivalent to a living human being?
Well, if you apply the current precisely enough, sure. Just because we can't currently demonstrate that on a human brain (though we're getting pretty close with animals) doesn't mean the idea is unsound.
Your brain is ultimately just numbers represented in neuronal form. What's conscious, the neurons?
FWIW I'm a hardcore idealist, but in the way it was originally posed, not in the quasi-mystical way the Hegelians corrupted it into.
Why should one be more valid than the other?
Yes, it's almost a perfect conflict of interest. Luckily that's fine, because we're us!
There is a valid practical difference, which you present pretty much perfectly here. It's a conflict of interest. If we can construct a consciousness in silico (or arguably in any other medium, including meat - the important part is it being wrought into existence with more intent behind it than it being a side effect of sex), we will have moral obligations towards it (which can be roughly summarized as recognizing AI as a person, with all moral consequences that follow).
Which is going to be very uncomfortable for us, as the AI is by definition not a human being made by the natural process through which human beings are made, so we're bound to end up in conflict over needs, desires, resources, morality, etc.
My favorite way I've seen this put into words: imagine we construct a sentient AGI in silico, and one day decide to grant it personhood, and with it, voting rights. Because of the nature of digital medium, that AGI can reproduce near-instantly and effortlessly. And so it does, and suddenly we wake up realizing there's a trillion copies of that AGI in the cloud, each one morally and legally an individual person - meaning, the AGIs as a group now outvote humans 100:1. So when those AGIs collectively decide that, say, education and healthcare for humans is using up resources that could be better spent on making paperclips, they're gonna get their paperclips.
This materialist world view is very dangerous and could lead to terrible things if you believe numbers in a computer and a human being are equivalent.
And what are those atoms made of? Just a bunch of quantum numbers in quantum fields following math equations.
A better question is: did this dead brain briefly wake up and experience anything?
AI aren't alive, and aren't humans. So what?