I would strongly argue that even being able to prove that a human might perform worse is not an acceptable excuse, for the reasons I will outline. The bar for a computer needs to be significantly higher than the bar for a human.
We know that humans make mistakes for a multitude of reasons. They can be tired, moody, distracted, stressed, time-pressured, or simply not care enough, all of which can contribute to making the wrong call. First, a computer does not suffer from any of these issues, so it cannot hide behind them. Second, a computer program is able to perform billions upon billions of computations within a given time period precisely in order to ENSURE that an action with grave consequences is absolutely warranted.
Maybe in some domains we can tolerate errors from AI, but when the decision is whether a person (and everyone around them) lives or dies, merely being more accurate than a human on average is surely not enough. "Killbots" MUST be extremely heavily regulated.