The planning software would want to slam on the brakes without predicting that the blob of sensor data in front of you is going to continue moving forward at highway speeds. That motion prediction enables the planning software to know that the space in front of your vehicle will be unoccupied by the time you reach it.
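To make that concrete, here's a minimal 1-D sketch of why the predicted motion of the blob matters to the planner. All numbers, names, and the decision rule are illustrative assumptions, not anything from a real stack:

```python
# Simplified 1-D sketch: brake only if, under a constant-velocity
# prediction for the lead object, we can't stop before closing the gap.
# gap_m, speeds in m/s; max_decel is an assumed comfortable-hard braking rate.

def must_brake(gap_m: float, ego_speed: float, lead_speed: float,
               max_decel: float = 6.0) -> bool:
    closing = ego_speed - lead_speed          # <= 0 means the gap is growing
    if closing <= 0:
        return False                          # space ahead will be unoccupied
    stopping_dist = closing ** 2 / (2 * max_decel)  # distance to kill closing speed
    return stopping_dist >= gap_m

# Blob 40 m ahead predicted to keep moving at highway speed: no braking needed.
print(must_brake(40.0, 30.0, 29.0))   # False
# Same blob predicted to be stationary: slam the brakes.
print(must_brake(40.0, 30.0, 0.0))    # True
```

The whole decision flips on `lead_speed`, which is exactly the quantity the prediction module has to get right.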
A similar prediction error was the reason Cruise rear-ended the articulated ("bendy") bus in SF a while back. The software segmented the front and rear halves of the bus as two separate entities rather than one connected vehicle, and mispredicted the motion of the rear half.
I think we're all on the same page about this part, but what's confusing (and darkly hilarious) is: why would the correct answer ever be to drive into an unmoving object?
If it had tried to avoid the truck, swerved, and hit a different vehicle, there would be no confusion here. But the self-driving algorithm is effectively committing suicide, kamikaze-style. That's novel.
My guess is that the self-driving car was not able to recognize the truck until it was very close, and the sudden appearance of the truck was interpreted by the algorithm as the truck moving very fast. The best answer in that case would be to let the truck pass (basically what the Waymo did).
But that means the LIDAR information showing the shape isn't moving is being deprioritized in favor of the recognized object's computed high speed - a situation that could only really occur if a speeding vehicle plowed through a stationary object.
Fantastic solution for avoiding a situation like this -> https://www.youtube.com/watch?v=BbjjlvOxDYk
But a bad solution for avoiding a stationary object.
The brakes don’t respond immediately - you need to be able to detect that a collision is imminent several seconds before it actually occurs.
This means you have to also successfully exclude all the scenarios where you are very close to another car, but a collision is not imminent because the car will be out of the way by the time you get there.
Yes, at some point before impact the Waymo probably figured out that it was about to collide. But not soon enough to do anything about it.
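A back-of-envelope calculation shows why detection has to happen seconds out rather than at the last moment. The latency and deceleration figures below are assumptions for illustration, not measured values from any vehicle:

```python
# Why a collision must be flagged well before impact: the car travels
# through the system's detect-and-actuate latency before braking begins.
# latency_s and max_decel are assumed round numbers, not real specs.

def min_detection_distance(speed_mps: float, latency_s: float = 0.5,
                           max_decel: float = 8.0) -> float:
    latency_dist = speed_mps * latency_s              # distance covered before brakes bite
    braking_dist = speed_mps ** 2 / (2 * max_decel)   # kinematic stopping distance
    return latency_dist + braking_dist

v = 30.0  # m/s, roughly highway speed (~67 mph)
d = min_detection_distance(v)
print(f"at {v} m/s you must flag the obstacle {d:.1f} m ({d / v:.1f} s) out")
# prints: at 30.0 m/s you must flag the obstacle 71.2 m (2.4 s) out
```

Even with those generous numbers, the time budget is a couple of seconds; a last-instant "about to collide" signal arrives too late to matter.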
I get that self-driving software is difficult. But there's no excuse for this type of accident.
But take an even slightly more complex example: you're on a two-lane roadway and the car in the other lane changes into your lane, leaving inadequate stopping distance for you. You brake as hard as you safely can (maybe you have a too-close follower, too), but still there will be a few seconds when you could not, in fact, avert a collision if for some reason the car in front braked.
I have no idea what the legal situation would be: is it their fault if the crash happens within 3 seconds, but yours if it happens after you've had time but failed to re-establish your needed stopping distance?
Honestly even in the simple one lane case, I doubt you can slam your brakes on the interstate for no reason then expect to avoid any liability for the crash, blaming your follower for following too close.
Driving has a bunch of rules, then an awful lot of common sense and social interaction on top of them to make things actually work.
If the max speed is 35 mph, a good braking system has enough margin to respond to LIDAR info by safely stopping, probably 99% of the time.
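As a rough sanity check on that claim, here's a sketch comparing stopping distance from 35 mph against a typical LIDAR detection range. The deceleration, latency, and range values are assumptions, not spec'd figures for any sensor or vehicle:

```python
# Rough check: stopping distance from 35 mph vs. an assumed LIDAR range.
# max_decel, latency_s, and the 200 m range are illustrative guesses.

MPH_TO_MPS = 0.44704

def stopping_distance(speed_mph: float, max_decel: float = 7.0,
                      latency_s: float = 0.5) -> float:
    v = speed_mph * MPH_TO_MPS
    return v * latency_s + v ** 2 / (2 * max_decel)

lidar_range_m = 200.0   # assumed usable detection range
d = stopping_distance(35.0)
print(f"~{d:.0f} m needed to stop from 35 mph; LIDAR sees ~{lidar_range_m:.0f} m")
# prints: ~25 m needed to stop from 35 mph; LIDAR sees ~200 m
```

At urban speeds the sensor range dwarfs the stopping distance, which is why the low-speed case is so much more forgiving than the highway case.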
It’s why it’s so difficult to do (actually), and the ability to do it well is just as much about the risk appetite of whoever is responsible as anything else - because knowing whether a car is likely to pull out at the light into traffic, or how likely someone is to be hiding in a bush, is really hard. But that is what humans deal with all the time while driving.
Because no one can actually know the future, and predicting the future is fundamentally risky. And knowing when to hold ‘em, and when to fold ‘em is really more of an AGI type thing.
(It's been quite some years since I worked on vision-based self-driving, so my experience is non-zero but also quite dated.)