I feel that it covers an awful lot of them. If you cap teleop driving at 20 km/h or something (or maybe a dynamic cap based on your RTT), that still covers all of the parking lot scenarios, as well as many sensor-failure situations, like if you needed to crawl along in the right-hand lane because it's a blizzard and the radar is blind.
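To make the dynamic-cap idea concrete, here's a back-of-envelope sketch of how you might derive a speed cap from measured RTT: pick a fixed safety margin, then solve for the speed at which (latency travel + operator reaction travel + braking distance) still fits inside it. The function name and every numeric default here are my own illustrative assumptions, not anything Starsky actually used.

```python
import math

def teleop_speed_cap(rtt_s, reaction_s=1.0, decel_mps2=4.0, margin_m=30.0):
    """Return the max speed (m/s) such that distance covered during network
    latency plus operator reaction, plus braking distance, stays within a
    fixed safety margin. All default values are illustrative assumptions.

    Solves: v * (rtt + reaction) + v^2 / (2 * decel) = margin, for v >= 0.
    """
    t = rtt_s + reaction_s
    # Quadratic in v: v^2 / (2a) + v*t - margin = 0; take the positive root.
    return decel_mps2 * (-t + math.sqrt(t * t + 2.0 * margin_m / decel_mps2))
```

The cap shrinks monotonically as RTT grows, so a link that degrades mid-maneuver would force the truck to slow down rather than barrel ahead on stale commands.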
In any case, the Forbes article specifically addresses how they modeled these things:
"Up ahead a deer jumps into the truck’s lane and hundreds of miles away a teleoperator is asked to take control of the vehicle. But they aren’t able to in time – either the deer jumped too quickly or the teleoperator wasn’t able to get situationally aware or worse yet: the cellular connectivity isn’t good enough!
Such was the situation painted to me time after time after time as CEO of Starsky Robotics, whose remote-assisted autonomous trucks were supposed to face exactly such a scenario. And yet, it was an entirely false scenario.
As I’ve written about before, safety doesn’t mean that everything always works perfectly, in fact it’s quite the opposite. To make a system safe is to intimately understand where, when, and how it will break and making sure that those failures are acceptable."
The fleet argument also confuses me; hasn't that been the Waymo/Uber pitch since forever, a centrally owned and managed fleet of autonomous vehicles for hire? Why would that be considered an especially risky direction?