Autonomous vehicles have various redundant systems built in that can take priority and override false positives.
I was previously under the assumption that one of the really important reasons for Lidar is that it can get you closer to an absolute truth about whether something is a solid object, and where that hypothetically solid object is relative to the position of the vehicle, regardless of what the classifier thinks it is seeing.
So did the lidar fail to detect the solid object, was the lidar de-prioritized, or was it simply not available as a fallback?
Presumably Radar and proximity sensors were also involved. What were they doing?
This is a fascinating edge case, and I hope to hear the real reason for the two incidents.
I understand we have to have explanations or we can't fix them, but it's just as important to understand this should never have happened even WITH the described failure.
If I had to guess, there's code to avoid stopping at every little thing, and that code took precedence (otherwise rides would not be enjoyable). And I get the competing interests here, but there must be a comparison to humans when these incidents happen.
https://waymo.com/blog/2024/02/voluntary-recall-of-our-previ...
I'd sat through 'five whys' style postmortems before, but it was reading air safety investigation reports that finally got me to understand it and make it a useful part of how we get better at our jobs.
By comparison, the way we're investigating and responding to self-driving safety incidents still seems very primitive. Why is that?
One difference with this situation in terms of the public perception/discussion though is that, say in the 1960s, air safety wasn't very good compared to today, but still there was no question of eliminating air travel altogether due to safety issues. Today there is definitely an anti-self-driving contingent that would like to hype up every accident to get the self driving companies shut down entirely.
In this case two self-driving cars crashed into another road vehicle because they failed to recognise (in time) which direction it was moving. Waymo should be commended for having voluntarily issued a software recall, but this problem is severe enough that the decision shouldn't really be up to Waymo's good judgement.
There is an explicit culture and mechanism of blamelessness around safety concerns and minor violations/deviations, which is incredibly helpful. Read about the ASRS* program (admin'd by NASA, with anonymity for non-intentional issues, a prohibition on use of submissions for enforcement purposes, and an explicit "get out of punishment" card from the FAA): https://asrs.arc.nasa.gov/overview/immunity.html
The FAA is also explicitly including "evidence of voluntary compliance" and "just culture" in its approach to aviation safety, and explicitly changed its goal from enforcement and proof to ensuring compliance [with enforcement being only one available tool]: https://www.faa.gov/about/initiatives/cp (PDF pres: https://www.faa.gov/sites/faa.gov/files/2022-09/The%20Compli... )
I'd also read a bunch of aviation reports: https://www.ntsb.gov/Pages/monthly.aspx (more detailed reports are available approximately 2 years after the occurrence date and more details are available for fatal or air carrier occurrences, so if you don't care which ones to read, filter for those to start).
If you're more video oriented, watch @blancolirio, @NTSBgov, @AirSafetyInstitute, or @pilot-debrief. (I'd skip @ProbableCause-DanGryder.)
For a short summary, there is an intense focus on determining the facts (who, what, when, where, maybe some guesses as to why) and drawing conclusions about primary and contributing causes from there.
* Aviation Safety Reporting System
A simple lidar moving object segmentation, which doesn't even know what it's looking at but can always spit out reasonable path predictions, would probably have saved them.
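A minimal sketch of what that could look like (hypothetical, not Waymo's or Mobileye's actual pipeline): track each moving lidar cluster's centroid across sweeps and fit a constant-velocity model to extrapolate its path, with no classification of what the object is. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def predict_path(centroids, dt, horizon_s=2.0, steps=4):
    """Fit a constant-velocity model to a moving cluster's recent
    centroids and extrapolate its path -- no object class needed.

    centroids: (T, 2) array of the cluster's (x, y) positions,
               one per lidar sweep, oldest first.
    dt:        time between sweeps, in seconds.
    """
    centroids = np.asarray(centroids, dtype=float)
    # Average velocity over the observed window.
    velocity = (centroids[-1] - centroids[0]) / (dt * (len(centroids) - 1))
    # Future timestamps to predict at (excluding t=0).
    times = np.linspace(0.0, horizon_s, steps + 1)[1:]
    # Predicted positions: last observed position + v * t.
    return centroids[-1] + np.outer(times, velocity)

# A cluster advancing 1 m per sweep in x at 10 Hz, i.e. 10 m/s:
obs = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
future = predict_path(obs, dt=0.1, horizon_s=1.0, steps=2)
# Predicts roughly x = 7 m at 0.5 s and x = 12 m at 1.0 s.
```

Even something this crude answers the question that matters for collision avoidance: "is that thing's path about to intersect mine?", regardless of what the classifier labels it.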
I think Mobileye is doing something like this, but they release so little data, which is always full of marketing bullshit, that it is hard to know what exactly they are working on.
We're now getting to see where autonomy needs to develop "spider sense": the scene in front of me feels wrong because some element isn't following the expected behavior maybe in ways that can't really be rationalized about, so we'll become much more conservative/defensive when dealing with it.
Each model can potentially predict longer into the future but also has more complexity and things that can go wrong. So you keep track of how well each model is doing (on an object basis) and if one level is failing then you fall back on a stupider one. You might also want to increase caution if your models are not doing well (lower speed and increased safety distance).
    # run every tick; dt is the tick length in seconds
    if prediction != reality:
        mismatch += dt
    if mismatch > 1.0:  # a second (or two)
        abort_driver_assist()

But if everything is fine, everything is fine, everything is fine, and then all hell breaks loose? We are not as good at dealing with that.
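The per-object fallback idea above could be sketched like this (hypothetical names; assume each object is tracked against a ladder of predictors ordered from most capable to simplest, with a running error score per model):

```python
def pick_model(models, errors, threshold=1.0):
    """Return the first model whose recent prediction error for this
    object is acceptable; the last (simplest) model is the fallback."""
    for model, err in zip(models, errors):
        if err <= threshold:
            return model
    return models[-1]  # the dumbest model is always available

# Illustrative ladder: learned behavior prediction -> constant
# velocity -> constant position. Errors tracked per object,
# e.g. meters of drift between prediction and observation.
models = ["learned-behavior", "constant-velocity", "constant-position"]
errors = [3.2, 0.4, 0.1]
chosen = pick_model(models, errors)  # -> "constant-velocity"
```

The same per-object error scores could also drive the caution knobs mentioned above: when even the simple models are struggling, lower speed and widen the safety distance.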
When I make a dumb mistake, the other drivers rarely learn from it.
At least that is how I (as a non-expert) imagine these models work -- the model has an excellent chance of crashing at every new unique situation it encounters out of a nearly unlimited set of possible situations (which implies a high frequency of encountering new situations).
We also know how to hold individuals accountable for individual accidents. We know we won't get justice when people inevitably start getting killed by standard corporate greed, incompetence, and enshittification.
All, no. Enough to make a difference, easy.
Black Friday in February! In-Store offer only!
$99 Playstation 5 to first 100 customers at each Walmart location!
You can bet there will be a significant increase in people driving badly. edit: Make it Taylor Swift tickets, and you can increase the size of the frenzy.

This would be common for a debt recovery, or when a city impounds the vehicle, where it's taken without the cooperation of the owner.
https://external-content.duckduckgo.com/iu/?u=https%3A%2F%2F...
> People worry that ways and times [self-driving cars] are unsafe (separate from overall rates) will be unusual, less-predictable, or involve a novel risk-profile.
In this example, having a secretly cursed vehicle configuration is something that we don't normally think of as a risk-factor from human drivers.
_______
As an exaggerated thought experiment, imagine that autonomous driving achieves a miraculous reduction in the overall accident/injury rate, down to just 10% of the rate when humans were in charge... However, of the accidents that still happen, half are spooky events where every car on the road targets the same victim for no discernible reason.
From the perspective of short-term utilitarianism, an unqualified success, but it's easy to see why it would be a cause of concern that could block adoption.