Self-driving cars should be able to use sensors that detect obstacles far better than human eyes can.
I can't believe that radar and LIDAR were completely unable to see this woman until the visual cameras picked her up. This seems like a serious flaw in either the sensors or the software. The driver, despite looking down at a phone, still seemed to react faster than the computer!
I'm not that comfortable with Uber doing self-driving cars. I don't really consider them a true tech company. They are not built on world-class engineering and design; they are largely a company built by circumventing regulations and shedding staff employees. They've done incredible legal work with regulatory environments.
A company like Google, I think they have the talent and the culture to build something really good here. A company like GM would understand the stakes at hand here and would be cautious. Uber just doesn't seem to have the talent, mission or ethics to be in the self-driving car business. It's no surprise they are the first company to kill a pedestrian.
The worst part is that the car didn't react at all. If it had started braking when the cameras saw her, that would still have been better than what happened. But there was no reaction. Nothing from LIDAR, nothing from radar, nothing from visual recognition, which to me suggests only one thing: she was picked up by all three sensors, and then Uber's algorithm decided she wasn't an actual obstacle and could be safely ignored, like a leaflet in the wind or an immobile traffic sign next to the road. That's far worse than a straight-up hardware malfunction.
Tesla is showing a video, clearly sped up, on their site, and it looks like Autopilot slowed down as it drove by pedestrians on the side of the road. Around 1:30: https://www.tesla.com/videos/autopilot-self-driving-hardware... It looks like it is very cautious at stops and when there are things roadside, almost too cautious.
I tend to agree with you. I'll even give their culture a tiny bit of slack if they stay simply a ride-share/taxi company (very tiny; you shouldn't abuse or harass people, ever), and I think it would make a ton of sense for them to partner with self-driving companies, but it seems flawed that they turned building their own into an existential issue. It's ironic: ride share has provided a very reasonable alternative to driving after a bar visit, with measurable results in some places. Safety is something they could sell on.
I recall one of my favorite scenes from Fight Club, wherein Norton talks about the cost of a lawsuit times the likelihood of a crash versus the cost of a recall.
I'm all for tech companies getting into self-driving cars; I'm less excited about car companies with shoddy track records on safety and no history of disruption. There's a reason Cruise Automation (the self-driving startup that GM bought) insisted on remaining separate from GM and not being absorbed into GM's culture.
edit: spelling
To anyone interested in this issue, I recommend the following paper.
https://law.vanderbilt.edu/files/archive/212_Corporate-Risk-...
Also, this story about the engineer who investigated the infamous Pinto 'fire after rear end collision' issue, and the decision not to recall. https://www.newyorker.com/magazine/2015/05/04/the-engineers-...
"As good as humans" implies that over one million people would be killed by robots every year. Some of the families, loved ones, and attorneys of these people would be able to make very convincing cases that they would not have died if a human had been driving. This is a hard problem to overcome.
What I do know is that Uber, a company with virtually no scruples, is very far down the list of companies that I would trust to develop their own self-driving technology. It makes total sense for them to use self-driving cars to replace human drivers, but I'd feel a lot better about this if they leased the technology from another company.
Well, the LIDAR doesn't control the car. It's just a sensor. The car is controlled by an AI system that takes information from the sensors and makes decisions based on that information (and according to its training).
So the Lidar might very well have "seen" the woman, but if the car's AI didn't recognise her as a human in the middle of the road, or didn't know that it had to stop before hitting her, then the car wouldn't stop.
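To make that concrete, here is a purely hypothetical sketch of how a planning layer could receive a detection from the sensors and still decide not to brake for it. Every class name, threshold, and filtering rule below is an assumption for illustration; nothing here reflects Uber's actual software:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "pedestrian", "sign", "debris" (hypothetical labels)
    confidence: float   # classifier confidence, 0..1
    moving: bool        # whether the tracker considers it to be moving

def obstacles_to_brake_for(detections, min_confidence=0.8):
    """Return only the detections the planner treats as real obstacles."""
    obstacles = []
    for d in detections:
        if d.confidence < min_confidence:
            continue  # low-confidence detection: discarded as sensor noise
        if d.label in ("sign", "debris") and not d.moving:
            continue  # static roadside clutter: deliberately ignored
        obstacles.append(d)
    return obstacles

# A pedestrian mislabeled as static debris, or correctly labeled but with
# low confidence, would be silently dropped in both cases:
frames = [
    Detection("debris", 0.9, moving=False),     # misclassified pedestrian
    Detection("pedestrian", 0.6, moving=True),  # right label, low confidence
]
print(obstacles_to_brake_for(frames))  # -> []  (the car never brakes)
```

The point of the sketch: the LIDAR "seeing" something and the planner acting on it are two separate steps, and a failure in the second is invisible in raw sensor logs.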
This is part of a test. How can you determine that these cars are better than humans if you can't try them? Put more regulation on the tests, sure, but don't stop them all; otherwise we wouldn't be able to do any of it. Considering that the safety driver didn't have her full focus on the road, we can already see that they didn't put the right person there.
How about saving human time that would otherwise be spent on driving?
Self-driving cars don't promise to save time. It's still driving. Their promise -- their only real promise -- is to save lives. And I think saving tens of thousands of American lives every year is a worthwhile goal. Worldwide, more than 1 million people die in car crashes every year. Humans have proven that they can't drive cars well, but we have put up with it because it is convenient.
I know some people will say that they could work on their way to work, but I can't do work in a car. Staring at a laptop in a car gives me motion sickness, along with a lot of other people. And, of course, this would only apply to people who do computer work in the first place.
As someone who has been in serious car crashes and lost friends to them, I for one am all for self-driving cars. It's going to be a revolution, but we shouldn't be testing them on public roads if they are this bad.
There may be solutions to the time wasted driving that don't cost more human lives, like public transport. An AI will never have the intuition of a human brain, so you can try compensating with better sensors and faster reaction times, but if you have worse sensors and reaction times than a human, then you do not have a self-driving car yet; such toy projects were made by students years ago.
However, the driving footage is from after the release of the story. The moon on the 21st had an illumination of ~17%. The night of the accident was the day after the new moon, with an illumination of 1%. Someone needs to film the same section sometime between April 15 and April 17 to get a more accurate estimate of the light on that night. I expect the video will still show much lighter conditions than the suspiciously dark video, but without the same moon lighting conditions we are comparing apples and oranges.
(accident) http://www.moongiant.com/phase/3/18/2018
(filmed) http://www.moongiant.com/phase/3/21/2018
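Those illumination percentages can be roughly reproduced with a first-order cosine model of the lunar cycle. This is a crude approximation, not an ephemeris; the new-moon date (March 17, 2018) is taken from the links above:

```python
import math

SYNODIC_MONTH = 29.53  # days, mean new-moon-to-new-moon period

def illuminated_fraction(days_since_new_moon):
    """Approximate fraction of the lunar disc that is lit, ignoring
    orbital eccentricity (a first-order estimate only)."""
    phase_angle = 2 * math.pi * days_since_new_moon / SYNODIC_MONTH
    return (1 - math.cos(phase_angle)) / 2

# New moon was March 17, 2018:
print(round(illuminated_fraction(1) * 100))  # night of the crash (Mar 18): ~1%
print(round(illuminated_fraction(4) * 100))  # night it was filmed (Mar 21): ~17%
```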
EDIT: masklinn pointed out the accident occurred before the stoplight, in the much better-lit area, not the later area where the news crew was filming, which was commented on in the second video. It was near the stoplight with the parking deck in the background, rather than where there are only the long-spaced overhead lights. Incredibly misleading video from the dashcam. Maybe good to know for a defense of the true conditions, but it probably won't make that much difference. Look at the slideshow.
Most of the illumination in the driving footage clearly comes from the street lights, not from the moon. In the Uber dashcam footage, the street lights barely illuminate the ground at their own base; some seem not to reach the ground at all. The phase of the moon is essentially a non-factor.
Not to mention LIDAR is not affected by external light source. So the phase of the moon is doubly irrelevant.
I understand LIDAR should have handled it, and visible spectrum may not be much different with and without moon lighting. But I would like to see a similar conditions comparison to see exactly how bad the video is compared to actual lighting conditions.
One video pointed out a black splotch over the pedestrian. If you pause the video you can easily see it. A very unusual artifact, to say the least.
"So what does the uncompressed footage look like?"
Intern lost it.
If it was, someone should be going to jail as a result, since that's the footage they seemingly handed over to the police as evidence in the investigation.
I think it's a given that visual-only data is pretty lousy for AV purposes. There are too many issues with it.
I really wanna make excuses for the Uber vehicle here because self driving vehicles will be awesome, but in this particular case it just failed. :(
It doesn't matter if I were to run someone over when it's dark. I would still be accountable.
The discussion about footage removes focus from the actual issue which should be a legal issue, not a technical debugging session.
Edit: Sorry, I'm not a native English speaker. What I meant when I wrote "accountable" was that I would have to own up to what happened, not that I would get a sentence if it was an accident.
As long as you weren't drinking or otherwise found to be obviously negligent this is typically not the case.
The interesting part of this to me is that the police are treating this as if it were a human driver and holding the car to that level of responsibility. A human driver would not be charged with anything for a typical accident like this one (hitting a pedestrian in low light conditions outside of a crosswalk on a high speed road).
The court case will happen since this is so high profile, but if an average Joe driving the exact same route home from work had hit her, it likely would not even go that far.
It wasn't low light conditions and it isn't a high speed road. It most definitely could be pushed to court in the case of a human driver.
I live in NYC, where jaywalking is a way of life, and I've adopted two policies over the last few years:
1. Never jaywalk. Just don't do it. Find the appropriate and safe crosswalk, and wait.

2. Never assume it is safe to cross a street, even when you have the right of way. People laugh at me when I look both ways when we have the right of way, but the logic is simple: just because I have the right of way does not mean the drivers coming from either side are aware of that or are paying attention.
With the interior video, and the guy looking at his phone, it gets a little trickier. I think most prosecutors would let the guy twist in the wind for a few more months and then let him plead to a misdemeanor. A lot depends on the victim's family.
The problem is that the case is political and prosecutors tend to behave badly when media and publicity is involved. Hopefully the Tempe prosecutor isn't a camera hound.
I thought the whole point was that the average human sucked at driving. If a human driver had LIDAR, they would not have crashed into this woman; more precisely, a human driver with LIDAR would have had to be criminally negligent to crash into and kill this woman.
Of course in the end self driving cars need to fix the bug that caused this to happen.
So far, the police seem to think "no one." I have a big problem with that. Someone should be liable, otherwise there will be no accountability when building self-driving systems. Even if we say "the insurance company is liable," that would still be some progress, because then we'd at least know the insurance companies would put a lot of pressure on the self-driving car makers to prove that their systems are safe.
But I'd say the car maker needs to be the most culpable party, because at the end of the day, if a new car ships with broken brakes and that causes deaths, you don't blame the brake-pad maker. You blame the car maker. They are the ones who should make sure everything works and is perfectly safe before selling the product to consumers or putting it on the road.
Yes you do. You blame both. Both have an independent duty to not sell defective products. MacPherson v. Buick Motor Co., 217 N.Y. 382, 111 N.E. 1050 (1916).
I don't know where you live, but in most legal systems I am aware of there are definitely situations where you wouldn't be responsible. If you were not impaired, driving legally and someone suddenly walks in front of your car, why would you be accountable for that?
Time to learn some new favourite German words. The first is Sichtfahrgebot: you may only drive as fast as your sight allows. If your stopping distance exceeds your vision, you are driving too fast for conditions. This is doubly obvious with autonomous vehicles.
The other is Betriebsgefahr. It was your decision to drive a multi-ton vehicle with enough power for the electricity needs of a city block. She was walking; you were driving. You introduced the vast majority of the risk, and it was your choice that is responsible for the lethal injuries. For this mere fact, you bear a significant percentage of the fault, always.
Do remember that on a civil suit culpability is a sliding scale. It's not guilt or not but what percentage of this accident is attributable to you. A driver looking at their phone, with an enhanced sensor safety system (lidar, sonar, nightvision) will get far more blame than a typical driver.
I expect the civil case to be settled quickly and quietly.
And she had already crossed an entire lane; she was struck in the right-hand lane of a two-lane street.
Not by a long shot. The /r/video thread has a comment with 20k upvotes purporting to demonstrate that the hit was inevitable "doing some basic math": https://www.reddit.com/r/videos/comments/86756p/police_relea...
The first two responses (sorted by "best") respectively blame the safety driver and provide their own anecdote of hitting something at night, you need response #3 (500 comments below if you don't fold subthreads) to see LIDAR mentioned and #4 and #5 to note that the dashcam footage is not representative of real visibility.
"In order for a human driver, or the driver in this car to have avoided this collision by merely hitting the brakes and traveling in a straight line, "as is the reaction when startled by something on the road" there would have needed to be at least another 127.5' of distance between the car and the pedestrian."
Wow, that comment is...bad.
(As an aside, the stopping distance of a car, including reaction time, traveling at 35MPH is about 100 feet.)
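That ballpark figure is easy to sanity-check with the standard reaction-plus-braking formula. A minimal sketch, assuming a 1.0 s reaction time and 0.8 g braking deceleration (typical textbook values, not measurements from this crash):

```python
G = 32.17  # gravitational acceleration, ft/s^2

def stopping_distance_ft(speed_mph, reaction_s=1.0, friction=0.8):
    """Total stopping distance in feet: distance covered during the
    driver's reaction time plus braking distance at constant deceleration."""
    v = speed_mph * 5280 / 3600          # convert mph -> ft/s
    reaction = v * reaction_s            # distance before the brakes engage
    braking = v ** 2 / (2 * friction * G)
    return reaction + braking

print(round(stopping_distance_ft(35)))  # ~103 ft, consistent with "about 100 feet"
```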
However, given that the sensors used are supposedly secret and innovative, along with the programs/models, I don't see the motivation for any AV company to release the data.
Will it alter the telemetry, too?
If the Tempe police doesn't get forensics experts on this, they're doing their job wrong. Uber is already known as a company that tends to hide and destroy this sort of evidence.
https://www.youtube.com/watch?v=1XOVxSCG8u0
Edit: I guess this video is also referenced in the article, after Brian's.
Ideally /u/ghdana would have posted the camera settings which filmed this, so you could get a better understanding of the situation.
Not that I'm defending UBER on this, as others have said LIDAR and IR in addition to the cameras should have picked the woman up.
That is true, but raising the ISO for proper exposure (well, 18% grey anyway) is something pretty much every retail camera made in the past decade does. I doubt Uber is using some experimental, esoteric camera here.
That said, I appreciate a site like Arstechnica doing a great job of dissecting just how badly Uber's tech failed here, legal responsibility or not.
https://azgovernor.gov/governor/news/2016/12/governor-ducey-...
Politics can always be an explanation.
So Uber moved to Arizona, where they could kill more freely.
While we should not absolve Uber, including the driver who was too busy looking at his phone to intervene, I do wonder if a standard driver would have also been deemed not at fault by the police. The answer is likely, "yes".
But it seems, though we do not know for sure, that the sensors completely failed, and on top of that the human driver also failed. So the question is: are these tests safe? Next time the failure could happen on a crosswalk. IMO the question of who is to blame is not as important as the question of whether it is safe to do these tests on public roads with such poor hardware and software.
Add in the characterization of the victim that has been going on. If the car had hit an ASU student it would have been assumed they were drunk/on the phone. If it had been a Mormon missionary (not uncommon in AZ) there would be a lot more focus on the car/driver. Instead, they hit a homeless/low value person. The discourse reflects that.
I've yet to get any information on one thing, though: are safety drivers operating under the assumption that the car works at SAE Level 2 or at SAE Level 3? Because if it's the latter, the driver has no business looking at their phone. If it's the former, the car should have a dead man's switch to ensure the driver stays alert.
Another thing that is not clear is whether they were looking at their phone or at instrumentation (e.g., telemetry or the like).
"But Officer, they came out of nowhere!"
never really holds up well for you. It's the same as saying "I flat out didn't see them, so it's not my fault." It's quite the opposite, in fact, and any cool-headed person will pick that up. I'm sure the investigation is well underway in any event.
> Speed also affects your safety even when you are driving at the speed limit but too fast for road conditions, such as during bad weather, when a road is under repair, or in an area at night that isn’t well lit.
This vehicle was speeding. Uber is at fault. How is this disputable?
Edit: One More link.
https://www.123driving.com/dmv/drivers-handbook-speed-limits
> You will need to drive with extra care at night. You cannot see as far ahead or to the side, and glare from oncoming cars can reduce your vision even more. Follow these guidelines for driving at night:
> Use your headlights (low beam or high beam) between the hours of sunset and sunrise.
> Low beam headlamps are only effective for speeds up to 20-25 MPH. You must use special care when driving faster than these speeds, since you are unable to detect pedestrians, bicyclists and others.
> High beam headlights can reveal objects up to a distance of at least 450 feet and are most effective for speeds faster than 25 MPH.
> Don't use high-beam headlights within 500 feet of oncoming vehicles.
> If you are behind other vehicles, use low beams when you are within 300 feet of the vehicle ahead.
> When leaving a brightly lit place, drive slowly until your eyes adjust to the darkness.
> If a vehicle comes toward you with high beams, flash your lights to high beam and back to low beam once.
> Don't look directly at oncoming headlights. Instead, watch the right edge of your lane. Look quickly to be sure of the other vehicle's position every few seconds.
> Drive as far to the right as you can if a vehicle with one light comes toward you.

Remember Google had to deal with bike corner cases a couple of years ago.
"A Cyclist's Track Stand Befuddled One of Google's Self-Driving Cars"
https://gizmodo.com/a-cyclists-track-stand-totally-befuddled...
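The handbook's low-beam/high-beam speed figures can be roughly reproduced: if you must be able to stop within your headlights' reach, the maximum safe speed follows from the stopping-distance formula. A minimal sketch, assuming AASHTO design values (2.5 s perception-reaction time, 11.2 ft/s² deceleration) and a rough ~120 ft low-beam reach; both headlight distances are assumptions for illustration:

```python
import math

def max_safe_speed_mph(sight_distance_ft, reaction_s=2.5, decel=11.2):
    """Highest speed (mph) at which you can stop within your sight distance.
    Solves reaction_s*v + v**2/(2*decel) = sight_distance for v (ft/s)."""
    rt = decel * reaction_s
    v = -rt + math.sqrt(rt ** 2 + 2 * decel * sight_distance_ft)
    return v * 3600 / 5280  # ft/s -> mph

print(round(max_safe_speed_mph(120)))  # low beams (~120 ft reach): ~21 mph
print(round(max_safe_speed_mph(450)))  # high beams (450 ft reach): ~52 mph
```

The ~21 mph result lines up with the handbook's "low beams are only effective up to 20-25 MPH" guidance.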
I will 100% bet that the police had to interact with/rely on Uber to provide them with the video and data. With Uber's history, and the importance of this to the company, can we trust that they didn't manipulate the video (i.e., make it darker)? In the end, as much distaste as I have for Uber, this isn't even about them; it's about process and chain of evidence.
In a non-autonomous accident, if there was a fight between two opposing drivers, the vehicle manufacturer is a neutral party, and it seems reasonable to rely on them to extract and hand over data. If, however, the fight was between the driver and the manufacturer (e.g., the driver asserted a failure of the vehicle), there is no way the manufacturer would be allowed to extract the vehicle's data logger. It would be done by an independent third party.
As these accidents happen (and no matter your perspective on this one/cars/autonomous vs. human drivers/etc. - they will happen) there needs to be methodology for extracting data and assessing it from the vehicles that does not rely on the manufacturer or any other conflict of interest party.
Further, this whole debacle shows how important it is that experts be involved in these decisions and discussions. The police chief has no knowledge of or experience with the technical details of autonomous vehicles. His statements have, from the beginning, been irresponsible and inappropriately deferential to Uber at the expense of a citizen of his town.
The rush for LIDAR/radar technology makes clear just how much autonomous tech relies on the nonvisual range. However, the chief, based on his statements, is obviously judging performance entirely on the visual range. He is not an expert. As an ex-Tempe resident, I doubt Tempe has independent autonomous-vehicle experts. I would almost guarantee they are relying on the companies being truthful and open with them.
That is bad.
For the video released so far, not necessarily. It seems to be from an off-the-shelf dashcam, which the police most probably not only know how to download, but also has already done so several times in other accidents. For the rest of the data (and the video from the self-driving cameras, if it's stored instead of just used in the control loop and then discarded), I agree with you that the police (and the NTSB) will probably need the help of the Uber engineers.
1. We need better-suited frontal cameras (with much better low-light performance) for self-driving cars and dashcams. Be it a dual sensor/lens setup or whatnot: if Apple can build one for the iPhone X for $35 [1], the technology is available and shippable now. Compared to the tens of thousands a car costs, and the hundreds or thousands manufacturers charge for add-on packages/options, that's virtually nothing.
2. We need better, more sophisticated headlamps, and new laws better suited to today's technological advancements [2]: something like a third, wide-beam mode on top of the high/low beam modes. From ubiquitous adaptive headlights to matrix laser beams, these improvements could increase driver visibility and perhaps save lives in situations like this.
[1] http://www.businessinsider.com/iphone-x-teardown-parts-cost-...
[2] https://blog.dupontregistry.com/mercedes-benz/why-are-adapti...
Jaywalking or not, the victim was clearly a pedestrian (not riding her bike); Uber's insurance (or Uber itself?) should compensate the victim's relatives. If it doesn't, I expect a huge backlash.
* not tone-mapped, I mean what HDR truly stands for
I'm with the majority here -- the LIDAR should have picked the pedestrian up and decelerated. Uber's self-driving tech is less safe than a human driver, and should be shelved.
A dead pedestrian here and there won't hurt the companies financially, so there's not much reason for them to push for perfect products. They can just shoot for 99% and account for the rest in damages. Disabling their service, however, will hurt them.
This needs to be treated like planes, trains, or medical devices are, not like a consumer device.
The guy here was clearly distracted. He only looks up at the road a fraction of a second before the crash.
My bet is that he'll be prosecuted as if he were a distracted driver: he's responsible for the safe operation of the vehicle (otherwise, why's he there in the first place?). He'll go to prison, probably, and rules will change very quickly as a result.
Driving a car is an active process, it's fatiguing but not inherently boring (excepting really long, straight, empty roads through unchanging scenery). Sitting in a SDC while it drives itself however is boring: there's a well-researched and well-understood attention deficit problem which nobody seems to be discussing here.
It's the same thing that makes TSA security screening such a tough thing to get right, or sentry duty: you can't expect humans to sit for hours passively monitoring for unpredictable and rare events that they then have to react decisively to. Brains don't work like that. These safety drivers need short shifts with frequent breaks, they need a partner, and they need an active background task that keeps their eyes up and forward and their brain engaged (giving a commentary, for example).
I just hope engineers working on software that can seriously injure or kill someone do the appropriate due diligence to ensure this doesn't happen again.
Could it be that the police / Uber employees / the municipality have a political goal of making this look unavoidable? It looks that way to me.
Driving at night or in a snowstorm is a very different environment than driving on a nice day.
At the same time, though, the person with the bike had no reflective markings, wasn't blind or anything, and was crossing the road at an unmarked spot without any regard for oncoming traffic. They didn't even flinch when the car was about to hit them. The guy in the car looked away for like 5 seconds; he clearly wasn't absent-minded. If the visibility wasn't that bad, any suspicious activity on the left would have grabbed his attention.
Not knowing the law as it applies for (semi-)autonomous vehicle testing in Arizona, I am very curious how this one turns out.
Based on my experience driving at night (In Norway, where it also gets pretty dark), even in rural areas with no street lights, you tend to discover pedestrians -even the ones not wearing safety reflectors- significantly more than a second and a half before you cross paths with them.
So - while the vehicle is driving autonomously, there's still a driver (or whatever we should call it, seeing as they do not drive as such) behind the wheel.
If the driver had paid attention to the road, rather than her cell phone, this accident likely wouldn't have happened.
So - who is to blame, legally speaking? The 'driver' for not paying attention? The coder implementing the control algorithms? The HW engineering team deciding which sensors to use? Someone else?
This will be a lot simpler when vehicles are 'properly' autonomous - but right now, it seems AVs simply give the 'driver' all sorts of incentive for not paying attention, while still not absolving them of responsibility. That's the worst of both worlds.
The released video looks like the latter, though it could also just be standalone dashcam footage. It is also pretty mangled by compression, especially the dark parts are just completely blocky, so it is probably not a good indicator of what the vehicle saw.
OTOH, bikes can be notoriously hard for image recognition; for example, it is not unheard of for reflective rims to be interpreted as the continuation of a curb or lane marking. LIDAR and radar should see them, though, and they probably did, so it is indeed unclear where it all went wrong.
The question is indeed why the electronics failed to detect her, but there's something really wrong with car lights like that.
I think there is a question of why didn't the optical cameras detect her. It doesn't seem poorly lit.
I think there are serious questions about why the Uber car didn't detect the victim, given all the sensors it was supposed to be using, but "came from the shadows" is an accurate description based on the Google Street View and overhead views of the crash site.
It helps to read the entire article before commenting.
However, the article goes on to explain that dipped-beam headlights don't dramatically increase your vision.
However, you are correct that it's apples to oranges. The dashcam sensor either had not adjusted correctly to the light conditions or isn't sensitive enough. That's not entirely relevant, though, as the dashcam doesn't (or at least I hope it doesn't) form part of the sensor array.