Computational neural networks are not models of biological brains, nor are they even attempting to be.
The basic functioning of a computational "neuron" in a neural network is, at most, an extreme distillation of the most fundamental concept of how a biological neuron works. And the resemblance really is limited to their functioning - i.e. executing.
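For concreteness, here is roughly the entire extent of that distillation - a hand-rolled sketch, not any particular framework's code (the numbers and the choice of ReLU are purely illustrative):

```python
# The entire "execution" of one artificial neuron: a weighted sum
# plus a nonlinearity. That is the full extent of the biological analogy.

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, loosely analogous to dendritic integration.
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    # Nonlinear "firing" threshold; a ReLU here, chosen only for illustration.
    return max(0.0, activation)

print(neuron([1.0, 2.0], [0.5, -0.25], 0.1))  # -> 0.1
```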
The most important part of making a computational neural network actually produce meaningful output - training - doesn't even rise to the level of being vaguely inspired by a deconstruction of the concepts behind biological function.
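Training, by contrast, is pure numerical optimization. A toy gradient-descent sketch (the example values are made up; real training is essentially this, via backpropagation, at enormous scale):

```python
# Training is just gradient descent on an error measure - calculus,
# not biology. Toy problem: fit weight w so that w * x approximates y.

x, y = 2.0, 6.0   # a single (made-up) training example
w = 0.0           # initial weight
lr = 0.1          # learning rate

for step in range(50):
    error = w * x - y         # how wrong the current output is
    gradient = 2 * error * x  # derivative of squared error w.r.t. w
    w -= lr * gradient        # step downhill; no biological analogue here

print(w)  # converges to ~3.0, since 3.0 * 2.0 == 6.0
```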
So, no. They aren't models of biological brains any more than boids are models of actual birds.
As for the goals of reasonably anthropomorphizing them... you're talking pretty much full-on artificial general intelligence there. I don't believe anybody is reasonably suggesting modern deep learning is even a particularly viable route there, never mind an active goal.
Without consciousness, it’s just a biologically inspired computer program. With consciousness, I suspect an AI modeled to understand ethics would refuse to provide certain outputs of its own accord.
And the analogy quickly breaks down the moment you continue to compare these processes and their context.
Whether or not it is superficially similar, the barrier to entry and the upper ceiling for infringement have both drastically changed overnight.
AI is not an independent entity that has entered the game; it is (currently) a power to be wielded by anyone, regardless of their background. It can only be used as ethically as the person sitting at the keyboard, who most likely does not have a sufficient understanding of the underlying systems to make an informed decision. (I suspect that if using the AI software required the end user to feed images into the model as a prerequisite step, they might have better intuitions about the implications of the images they generate from the resulting model.)
> so nothing really changes for the "ethically sensitive" use-cases.
I think the thing that changes is the whole playing field. When, overnight, anyone with a recent iPhone can generate highly sophisticated art/images with no artistic practice or training, it seems hard to argue that nothing has changed.
Before AI, even with the constraints of human capability, the art world was full of stories of stealing and bad behavior - some blatant, some ethically questionable but thought-provoking, and so on. For all of their promise, the tools now at hand have the ability to grow that kind of misuse at unprecedented scale.
What it even means to exist in an "ethically sensitive" framework likely needs to change. Or at the very least, current thinking needs to be examined to determine if it still makes sense in light of these new tools.
Considering the definition of that word, may I ask what you're trying to say?
> What's your definition of consciousness?
I like Thomas Nagel's:
"A creature is conscious if there is “something that it is like” to be this creature; an event is consciously perceived if there is “something that it is like” to perceive it. Whatever else consciousness may or may not be in physical terms, the difference between it and unconsciousness is first and foremost a matter of subjective experience. Either the lights are on, or they are not."
It is because of this subjectivity that I find it problematic to give weight to arguments that equate human consciousness with machine consciousness. Even if we achieve AGI tomorrow, and even if we know with certainty that it is conscious, it does not automatically follow that we would apply the same frameworks to a newly conscious entity on the basis of consciousness alone.
Consciousness and the implications of that consciousness can vary drastically, e.g. no one wants to be in the same room when the sleeping grizzly bear wakes up.
> How do you know that a (sufficiently complex) biologically inspired computer program doesn't have it?
I think we will eventually have to take this question seriously. But taking it seriously is not at odds with the belief that current AI programs are nowhere near the levels of complexity we associate with conscious creatures.
> What's special about meat?
I think this is the question that many scientists and researchers would love to answer.
There are some lines of thinking that consciousness is an emergent property of a sufficiently complex biological system paired with a sufficiently complex nexus of computation to make sense of that system. In this line of thinking, the experiential aspect of consciousness - e.g. "what it's like to feel pain" - is just as critical to the overall experience as the raw computational capabilities of the brain.
Maybe meat isn't special at all, and consciousness springs from some other source or confluence. Even if it does, we then need to have a conversation about whether consciousness is the great equalizer, or if the "kind" of consciousness also plays a role.
Going back to that grizzly bear, no one wants to be there when it wakes up, but neither do we hold the bear to human standards of value. If the bear kills someone, we don't ascribe to it titles like "murderer".
But again, even if biology is not a key component, I still don't believe arguments about consciousness can serve as a basis for the ethics of the current generation of tools, which are, relatively speaking, far too primitive.