This is where one needs to draw a line between morals and ethics: while morals are relative, ethics is inherent to all human beings (ethos = what makes a thing a thing, ethics = what makes humans human). So we don't need a _moral_ AI, but rather an _ethical_ AI. It can, as you point out, be very simple, and maybe Asimov's laws of robotics could be a good starting point.
I've read enough Asimov and stories that play with those laws to know that the "through inaction…" bit of the first law is a severe problem. You either need to define "harm" concretely and objectively for the AI or remove the imperative to act altogether.
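To make the asymmetry concrete, here's a rough Python sketch (all names here are hypothetical illustration, not from any real agent framework): without the inaction clause the first law is just a veto filter over the robot's own planned actions, but with it the robot is obligated to search everything it _could_ do, and both branches bottom out in a `harm` predicate nobody knows how to define.

```python
from typing import Iterable

Action = str
State = str

def harm(action: Action, state: State) -> bool:
    """The crux of the problem: no concrete, objective definition exists."""
    raise NotImplementedError("'harm' is exactly the term left undefined")

def prevents_harm(action: Action, state: State) -> bool:
    """Even harder: requires predicting counterfactual futures."""
    raise NotImplementedError

def first_law_without_inaction(candidates: Iterable[Action],
                               state: State) -> list[Action]:
    # Drop the inaction clause and the law is a simple veto filter
    # over the robot's own planned actions.
    return [a for a in candidates if not harm(a, state)]

def first_law_with_inaction(candidates: Iterable[Action],
                            all_possible: Iterable[Action],
                            state: State) -> list[Action]:
    # With "through inaction...", doing nothing is only permitted if no
    # possible action would prevent harm, so the robot must evaluate
    # everything it *could* do, not just what it planned to do.
    preventive = [a for a in all_possible if prevents_harm(a, state)]
    if preventive:
        # Inaction is forbidden; some harm-preventing action is mandatory.
        return [a for a in preventive if not harm(a, state)]
    return [a for a in candidates if not harm(a, state)]
```

Note that `harm` and `prevents_harm` both just raise `NotImplementedError`; that's the point. The law isn't computable until those are pinned down, and the inaction clause additionally demands counterfactual prediction over the whole action space.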