Because we don't know how to design goal functions. And even if we did, how would the AI measure "welfare"? The way it maximizes welfare might be horrifying to us. Look at how easy it is to fool current image-recognition neural nets with adversarial examples, then imagine a "solution" to the human welfare problem that is as far from what we actually want as an adversarial image of pink noise is from an image of a dog.
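To make the adversarial-example point concrete, here is a minimal sketch of the idea behind fast-gradient-sign-style attacks, on a toy linear "classifier" rather than a real neural net (all names and numbers here are made up for illustration). Each pixel is nudged by a tiny amount in the direction that most reduces the "dog" score, and because those tiny nudges accumulate across a thousand pixels, the classification flips even though no single pixel changed much:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: a linear score over 1,000 "pixels".
# Real nets are nonlinear, but the attack idea is the same.
w = rng.normal(size=1000)

def score(x):
    return w @ x  # > 0 means the model says "dog"

# A random "image", shifted so the model confidently scores it +20 ("dog").
x0 = rng.normal(size=1000)
x = x0 + (20.0 - w @ x0) * w / (w @ w)

# Perturb every pixel by at most eps, in the sign of the score's gradient.
eps = 0.05
x_adv = x - eps * np.sign(w)

print(score(x) > 0)                # True: the original is a "dog"
print(score(x_adv) > 0)            # False: the perturbed one is not
print(np.max(np.abs(x_adv - x)))   # yet no pixel moved by more than eps
```

The unsettling part is that the perturbation looks like low-amplitude noise to a human, while the model's output changes completely; the analogy is an optimizer whose "maximal welfare" state looks, to us, like pink noise looks next to a dog.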