The story in the linked article, about an Uber whipping through an intersection at 38 mph alongside two lanes of stationary traffic, seems sufficiently conclusive to me that their self-driving system is not governed by any sense of caution.
Here in the UK we have speed limits, but the rules of the road also call on drivers to consider "appropriate speed": you slow down in situations where you might have to react with very little warning. This ought to be extremely easy for an automated system. It can measure its braking distance with high accuracy, it can measure distances to the objects around it with high accuracy, and it can determine exactly which areas of the world it cannot see into but which could produce a threat with very little warning. So just fucking slow down.
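To make the "appropriate speed" point concrete, here's a minimal sketch of the kinematics involved, assuming a simple model where the car caps its speed so that reaction travel plus braking distance fits inside the road it can currently see to be clear. The function name, the reaction time, and the deceleration figure are all illustrative assumptions, not anything from the article or any real AV stack.

```python
import math

def max_safe_speed(clear_distance_m: float,
                   reaction_time_s: float = 0.5,
                   decel_mps2: float = 6.0) -> float:
    """Largest speed (m/s) at which the vehicle can still stop within
    the distance it can currently see to be clear.

    Solves  v * t_r + v^2 / (2 * a) <= d  for v: distance covered during
    the reaction time plus braking distance must fit inside d.
    (Hypothetical parameters chosen only for illustration.)
    """
    a, tr, d = decel_mps2, reaction_time_s, clear_distance_m
    # Positive root of v^2 + 2*a*t_r*v - 2*a*d = 0
    return -a * tr + math.sqrt((a * tr) ** 2 + 2 * a * d)

# Example: only 15 m of visible clear road next to stopped traffic
v = max_safe_speed(15.0)
print(f"cap speed at {v:.1f} m/s ({v * 2.237:.0f} mph)")  # ~10.7 m/s, ~24 mph
```

The point isn't the exact numbers; it's that the calculation is trivial for a machine that already knows its own braking performance and where its blind spots are, which makes 38 mph past stationary lanes hard to excuse.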
I've long been bearish on fully autonomous driving because I consider there to be so many corner cases in real-world driving, where ad-hoc non-verbal communication is needed to keep traffic flowing, that the computers would never catch up. Now I wonder if their solution is to just plough through every problem at 98% of the speed limit and then disclaim responsibility.
Set "too tightly", it will also have you slowing for every car approaching a stop sign from a side street.
Cars that randomly slow out of an excess of caution are also a hazard to other road users. Don't believe that? Go drive for a month with a series of alarms on your phone set to go off every 5-10 minutes. Every time one goes off, abruptly slow to half your prior speed. Do you think you'd make 1,000 miles without causing a road hazard or a collision?
I believe that might mean autonomous vehicles are not yet ready for road testing, if that kind of slowing is what the current state of the art commonly requires. (I last worked on autonomous vehicles in 1991; ours was entirely rules-based and we tested on public roads as well as private tracks. Ours was bad enough that the human driver hovered over the red E-stop button and was always paying attention. It was harder work than just driving the damn thing yourself, but we had to test in order to make progress. I'm sure loads has improved since then.)
I also don't think zero fatalities is a realistic goal, nor is it a standard that should unduly inhibit progress. People have always died in transit: on foot, on horseback, on bikes, and in cars. This is a version of the trolley problem. I don't mind, and in fact actively prefer, a system improvement that allows 100 deaths while saving 500, even if the 100 is entirely disjoint from the counterfactual 500.
In this specific case, I believe the autonomous car allowed one death that would also have been allowed by a human driver in the same circumstances, so it's a push.