Asimov's Three Laws aren't a replacement for the whole system of human values. His robot stories are full of robots doing strange things that conflict with human intuition yet are perfectly consistent with the Three Laws. Asimov himself recognized that the laws were insufficient, and added the Zeroth Law as a workaround:
"A robot may not harm humanity, or, by inaction, allow humanity to come to harm."
But this law is vague and difficult to interpret. The Laws of Robotics are not a formal specification of ethical behavior. From a fiction author's point of view, this is no problem: the ambiguities make for more exciting stories. But when real human lives are at stake, it's a serious problem. "Friendly AI" is a non-starter if we can't even define what "friendly" means.