Really, it’s a common difficulty with utilitarianism. Tesla says, “we will kill a small number of people with our self-driving beta, but it is impossible to develop a self-driving car without killing a few people in accidents, because cars crash, and overall the program will save far more lives than it costs.”
And then it comes out that the true statement is “it is slightly more expensive to develop a self-driving car without killing a few people in accidents,” and the moral calculus tilts a bit.