Slightly off topic, but I'm horrified by the idea of ever creating consciousness in machines. Imagine we built a piece of software that could feel, with controls for its emotions. I can't imagine it would take long before some bored teenager or sociopath, who in an earlier era would have tortured individual squirrels or insects, created an infinite suffering machine. You could run thousands of instances of your suffering machine and simulate a holocaust on your desktop. That's not a power I would trust the world with.
It's a mechanistic argument that says no machinery is capable of consciousness. How do you decide whether software is conscious? For instance, it could print "I'm conscious! Oh! I'm suffering!"