> To be fair, you have to keep in mind that deploying 1000 cars would quickly make self-driving cars safer than humans.
This seems unreasonably optimistic.
First, this particular crash is an egregious counter-example. The car doesn't even seem to slow down when the pedestrian's foot first comes into view, nor does it try to swerve. That's basic stuff for a human driver, never mind the more complex avoidance and risk mitigation a human driver can perform.
Second, we've had years of training various AI content curation algorithms on social networks, videos, blogs, etc. - and the most advanced AI and search company in the world still can't keep adult-oriented conspiracy videos off of YouTube Kids. And while you might counter that content moderation is a human problem, driving is too! Dangerous driving situations happen at the periphery of the traffic rules, where someone is doing something the drivers around them don't expect. I've seen people run solid red lights, drive the wrong way down a one-way street, pedestrians start crossing after their light turns red, etc. Short of creating special roads for autonomous cars only, how does an autonomous car deal with all of this successfully?