I know! We'll just add a program that checks whether or not the AI would ever stop and come to a decision.
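(That's the halting problem, of course. A minimal sketch of the classic diagonal argument, assuming the proposed checker existed as a hypothetical halts() function:)

```python
# Suppose the proposed checker existed: halts(f) is True iff f() terminates.
# (halts is hypothetical -- the diagonal argument below shows why it can't be.)

def paradox():
    if halts(paradox):   # checker says "it stops and comes to a decision"...
        while True:      # ...so loop forever instead,
            pass
    # ...otherwise, decide immediately.

# paradox() halts if and only if halts(paradox) says it doesn't.
# Whatever answer halts() gives is wrong, so no such checker can exist.
```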
Back to the article: they point out that impatient overrides by the test driver are apparently not counted as disengagements and do not appear in the reports to the DMV. The author tries to make this into a safety concern, but it just isn't. Maybe it's against the intent of the code as written, but maybe not, since the code seems to be concerned with true safety violations.
The code: https://www.dmv.ca.gov/portal/wcm/connect/d48f347b-8815-458e...
It's not impatience. It's the car having no idea what to do next, so it just sits there waiting for something to happen.
I count that as a failure on the path to autonomous driving. Remember: those 1% scenarios are the hardest part, yet they are critical. You can't just say "99% is good enough" - after all, humans drive perfectly 99.999% of the time (measured as time on the road vs. an assumed 5 minutes per accident - probably an overestimate, since the error leading to the accident likely took less time than that), and even that isn't good enough.
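For the curious, here's the back-of-envelope behind that 99.999% figure - every input is an assumption for illustration, not a sourced statistic:

```python
# What does "humans drive perfectly 99.999% of the time" imply?
BAD_MINUTES_PER_ACCIDENT = 5     # the comment's (generous) assumption
PERFECT_FRACTION = 0.99999       # fraction of driving time that's error-free
DRIVING_HOURS_PER_YEAR = 300     # assumed typical driver (~50 min/day)

bad_fraction = 1 - PERFECT_FRACTION                        # 1e-5
minutes_between_accidents = BAD_MINUTES_PER_ACCIDENT / bad_fraction
hours_between_accidents = minutes_between_accidents / 60
years_between_accidents = hours_between_accidents / DRIVING_HOURS_PER_YEAR

print(f"{hours_between_accidents:,.0f} driving hours between accidents")
print(f"~{years_between_accidents:.0f} years per accident for this driver")
# -> 8,333 hours, roughly 28 years between accidents: plausible for a
#    careful human, and still not good enough, which is the point.
```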
For any kind of self-driving vehicle, making a silly mistake and needing a human to override it to prevent a collision is not OK at any significant frequency. I'd guess the maximum acceptable rate would be no more than one such would-be collision in multiple years of driving.
For a self-driving private vehicle with manual controls, pulling over and saying "help me, human!" once every few months is perfectly acceptable. For a standalone self-driving taxi with no manual controls, it's not. For a self-driving fleet taxi with a remote operator able to take over and help it when it gets stuck, it may be.
I’m mostly thinking of when we take the human out of the equation completely: if the car is stopped indefinitely, that is going to cause all sorts of problems. If it’s blocking a city street, you’re creating traffic as well as potentially slowing down first responders. Being stopped on a highway is very much a safety hazard.
Even with a human driver able to take control, there are places where coming to a stop would be a very bad thing (an on- or off-ramp, for example).
Your comment is true, but consider the context: there was a vehicle already blocking the lane, and that is what caused the autonomous car to stop. The autonomous car did not block an unobstructed lane; the lane was already blocked.
I don't think these are the only two possibilities. A third might be that they're confident that if a car finds a situation it can't handle, it'll degrade gracefully. If 1 out of 10,000 rides ends in a crash, that's a problem. If 1 out of 10,000 rides ends in needing to call a human driver for a pickup, it's not.
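To put that 1-in-10,000 number at fleet scale (fleet size and utilization here are made-up assumptions):

```python
# What 1-in-10,000 rides means once a fleet is large.
FLEET_SIZE = 1_000          # assumed cars in service
RIDES_PER_CAR_PER_DAY = 20  # assumed utilization
FAILURE_RATE = 1 / 10_000   # "1 out of 10,000 rides"

rides_per_day = FLEET_SIZE * RIDES_PER_CAR_PER_DAY
events_per_day = rides_per_day * FAILURE_RATE

print(f"{events_per_day:.0f} events per day across the fleet")
# -> 2 per day: routine if each one is a stranded car phoning home for a
#    human pickup, catastrophic if each one is a crash.
```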
A bazillion miles on the same track isn't necessarily as impressive as an order of magnitude fewer miles on varied, uncontrolled/unmapped, snowy, icy, narrow, unlit, flooded, unplanned, etc. routes.
So how do I compare Google's/Waymo's vast number of sensor-rich, planned miles against Tesla's much smaller number of much less rich (no lidar) but far more diverse, real-world miles? And everyone else's (Uber/Lyft) in-between play?
Right now it's pretty unclear, to a layperson, who is ahead of the pack.
We can make an analogy to AI-assisted StarCraft here. What if, instead of having a fully automated AI, you could train your car to take you home by driving your route a couple of times? Kind of like how you might train the computer to go harass your opponent's expansion and then just call up that subroutine every time it's apropos (still waiting on Blizzard to make this game). A rough sketch of the idea follows below.
Don’t try so hard to replace the human. Make the human less busy, more powerful.
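A minimal toy sketch of that record-and-replay idea - all names here are hypothetical, and a real car obviously couldn't replay raw waypoints without live perception and control on top:

```python
from dataclasses import dataclass, field

@dataclass
class Waypoint:
    lat: float
    lon: float
    speed_mps: float  # demonstrated speed at this point

@dataclass
class RouteMacro:
    """A named route 'macro' recorded from a human demonstration."""
    name: str
    waypoints: list[Waypoint] = field(default_factory=list)

class Car:
    def __init__(self) -> None:
        self.macros: dict[str, RouteMacro] = {}

    def train(self, name: str, demo: list[Waypoint]) -> None:
        """Learn a route by 'driving it a couple of times'."""
        self.macros[name] = RouteMacro(name, list(demo))

    def replay(self, name: str) -> None:
        """Call up the subroutine: follow the recorded route."""
        for wp in self.macros[name].waypoints:
            # In reality each waypoint would go to planning/control, with
            # perception free to override for obstacles, lights, etc.
            print(f"steer toward ({wp.lat:.5f}, {wp.lon:.5f}) "
                  f"at ~{wp.speed_mps} m/s")

car = Car()
car.train("home", [Waypoint(37.77490, -122.41940, 11.0),
                   Waypoint(37.77610, -122.41750, 13.0)])
car.replay("home")
```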
Waymo was doing partial autonomy pretty well back in 2012, before self-driving cars were even a twinkle in Elon's eye, and before anyone had taken seriously the notion of using deep learning in autonomous vehicles. The real challenge is fully validating the safety and reliability of these systems to the extent that a commercial robotaxi service becomes feasible.
I recall having conversations back in 2012 wondering: what if these things get pretty good - reliable enough that they can be counted on to be safe, but still subject to getting hung up or confused in any of the myriad situations drivers can get into where some sense of contextual awareness, creativity, and higher-level reasoning is required? There were big debates about whether remote control or remote guidance would be viable as a solution for tricky situations. It turns out that both Waymo and GM are doing this. I'm not sure yet what the ratio of remote monitors to operational vehicles is expected to be for initial pilot deployments. It could be 1:100, or 1:10, or something else.
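For a sense of what drives that ratio, here's a crude utilization estimate - every number is an assumption for illustration, not a published Waymo/GM figure:

```python
# Rough sizing of remote monitors per vehicle.
INTERVENTIONS_PER_VEHICLE_HOUR = 0.05  # assume one "help me" every 20 hours
MINUTES_PER_INTERVENTION = 3           # assumed operator handling time
OPERATOR_UTILIZATION = 0.5             # keep slack so queues stay short

busy_minutes_per_vehicle_hour = (INTERVENTIONS_PER_VEHICLE_HOUR
                                 * MINUTES_PER_INTERVENTION)
vehicles_per_operator = (60 * OPERATOR_UTILIZATION
                         / busy_minutes_per_vehicle_hour)

print(f"~1 operator per {vehicles_per_operator:.0f} vehicles")
# -> ~1:200 under these assumptions; a 10x higher intervention rate pushes
#    it to ~1:20, which brackets the 1:100-vs-1:10 spread mentioned above.
```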
What's fascinating is that analysts have estimated that about $100 billion (not counting China) has been invested in the emerging self-driving-car industrial ecosystem - software development, chips, sensors, fleet-management services, mapping, logistics, and everything else. I don't think there's a historical analogue for something like that, where so much effort and value has been placed on an as-yet-unproven technology.
For me the interesting thing is that it's all playing out according to a script (it's needed some revision, but not much) I wrote as a thought experiment in 2010, just by asking myself, "Well, what would happen if this technology actually came to fruition?" I've been following the development of autonomous vehicles since the DARPA days, but at the time I never took it very seriously as something that might actually work in the real world; it was just a neat science experiment.
I've worked in software since the '90s; I've seen how the sausage is made at all kinds of companies, big and small. The theory of autonomous cars is a bunch of really smart folks hand-crafting history's finest software, and there may be some cases where that won't be too far from the truth. But the reality on the street is going to be a zillion different competitors cutting every corner, skirting every regulation they can get away with, and just shitting out the worst "move fast and break things" hackathon bullshit code that "sort of seems to work, most of the time" that they can manage. I know how devs and product managers think about testing and quality in the absence of dedicated and rigorous QA standards and infrastructure; in the context of life-critical systems, that is frightening.
Ask yourself: do you want to put your life in the hands of a codebase where some pimple-faced, learned-to-code-in-10-days bootcamp graduate just "fixed a bug" in the drive software by ctrl-c-ctrl-v'ing from Stack Overflow and then pushed to master? Because that is going to be the reality, not the ivory-tower "well, if they did it the RIGHT way" fantasy that people have in their heads. The only way we'll get the "right" way of autonomous-software development is if there is extensive and careful regulation with very rigorous auditing and process requirements. And we are nowhere near that right now.
So they are not cutting corners. I'll be more concerned about the companies playing catch-up, or the cheap clones that will try to cut costs on expensive sensors like lidar and skip the long, rigorous process required to develop this correctly.
What makes you think the NHTSA has the expertise or resources to carry this out? The proof will be in the empirical results (accidents/fatalities per mile), not super-duper code audits.
From what I've seen, that doesn't really make a difference either. Companies will follow the regulation on paper, but not in spirit. That's already happening in other heavily regulated areas of software development. My experience has actually been that heavy regulation makes for worse software, because the company starts spending more time on lawyers and "quality engineers" driving up meaningless metrics than on maintainable, clean code.
And if anyone doubts that, remember that this is what has always happened in computing. It's how Lisp Machines and Smalltalk systems lost to UNIX. It's how we got C and JavaScript. Worse is better: the rule of computing in competitive, market-driven environments.
I dread to see it applied to life-critical systems.
Humans are terrible drivers. A half-crap autonomous car might still be safer than the status quo. In any case, whether we're talking about consumer goods or services, this is a space markets work in. Calling for rules to be written before we fully understand the problem is a recipe for overregulation.
[0] Can we please call them "auto-autos"?
Perhaps some sort of compulsory licensing would be better.