Humans have incentives to not do those things. Family. Jail. Money. Food. Bonuses. Etc.
If we could align an AI with incentives the same way we can a person, then you'd have a point.
So far, alignment research keeps hitting dead ends no matter what fake incentives we try to feed an AI.