None of your examples entail building a mind. There's also a kind of irony in my example: as a branch of computer science, it is independently funded and seemingly unrelated to mental health.
You have a model of the brain. You do not have a model of the mind. You assume that by simulating a brain in sufficient detail, a simulated mind will emerge. I find that a hard pill to swallow.
Sure, it isn't ethical to experiment without consent. But in order to program an AGI, you first need to conjecture an explanation of how the mind works. One may be able to deduce from that explanation some of the ways in which minds can go wrong. Also, assuming consent is given, it will likely be easier (and less physically invasive) to observe the internal state of an artificial person than that of a biological one.