NHTSA, which, after all, studies crashes, is being very realistic.
Here's the "we're looking at you, Tesla" moment:
"Guidance for Lower Levels of Automated Vehicle Systems"
"Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997). SAE Level 2 systems differ from HAV systems in that the driver is expected to remain continuously involved in the driving task, primarily to monitor appropriate operation of the system and to take over immediate control when necessary, with or without warning from the system. However, like HAV systems, SAE Level 2 systems perform sustained longitudinal and lateral control simultaneously within their intended design domain. Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public. And, systems’ expectations of drivers and those drivers’ actual understanding of the critical importance of their “supervisory” role may be materially different."
There's more clarity here on levels of automation. For NHTSA Level 1 (typically auto-brake only) and 2 (auto-brake and lane keeping) vehicles, the driver is responsible, and the vehicle manufacturer is responsible for keeping the driver actively involved. For NHTSA Level 3 (Google's current state), 4 (auto driving under almost all conditions) and 5 (no manual controls at all), the vehicle manufacturer is responsible and the driver is not required to pay constant attention. NHTSA is making a big distinction between 1-2 and 3-5.
This is a major policy decision. Automatic driving will not be reached incrementally. Either the vehicle enforces hands-on-wheel and paying attention, or the automation has to be good enough that the driver doesn't have to pay attention at all. There's a bright line now between manual and automatic. NHTSA gets it.
So the reason it was a big deal is because it was a huge fatality. Tesla drivers are generally a pretty safe bunch. Statistically, if autopilot hadn't been engaged, that death would not have occurred. Autopilot makes Tesla drivers less safe, not more safe.
Also, the government is doing self driving industry a huge favor. These fatalities could screw over the whole industry if they get out of hand. Musk is giving self driving a bad name.
It's the offence that an engineer feels about something being marketed as something it's not.
Tesla is fooling the public. The opinion of the general public who don't drive Tesla's cars is that automated driving is already here and Tesla is leading the way.
[0] You can say they tell you to keep your hands on the wheel and all that, but they themselves manufactured/fanned a ton of hype to the contrary. It's like arguing that you should have paid more attention to the EULA.
he's definitely anti "disguising level 2 as autonomy" though.
A likely future is one where automation is only enabled for consumers as an option on a minority of roads (starting with the Interstate Highway System) that have been heavily mapped and managed, and we work from there, developing the algorithms at high sample size, then slowly extending out into the state highways and arterials. The roads and maintenance actions will likely also, as the tech progresses, have some modifications made to increase reliability.
These cars are going to need a large quantity of sensors; the Uber self-driving car has "something like 20 cameras, a 360 degree radar, and a bunch [7] of laser [rangefinders]", and this is a decent start. A Tesla, and even a Google car, is simply not equipped for enough edge cases to let a consumer near without making them hands-on-wheel liable to take over.
People can work around any system for this, but stuff like this makes them have to do it consciously. Seems pretty reasonable on Tesla's part.
I think there may be a fundamental flaw with lane keeping. It removes the driver from doing anything but still requires constant vigilance. That might be asking too much. My ADD is too strong to watch the road without having to do any part of the driving. I suspect a lot of people are the same way.
If most drivers are just keeping their hand on the wheel while day dreaming, Tesla should be forced to just disable the feature until the tech is ready for Level 3. Or use the Level 2 tech as a backup only.
I.e. a loud beeping noise that annoyed pedestrians and other drivers until you took the wheel. Kind of like how accidentally triggering your car alarm in the parking lot will lead to a very hasty correction on your part.
An eye-tracking system might work.
You get positive points for avoiding situations that are noticed (but not within the audible warning threshold) or correctly reacting to input (warned but not yet in automated 'fail safe').
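A vigilance-scoring scheme along those lines could be sketched as follows — the event names and point values here are entirely made up for illustration, not anything from the NHTSA document:

```python
def update_attention_score(score, event):
    """Hypothetical driver-vigilance scoring: reward hazards the driver
    handled before any warning fired, reward warned events the driver
    reacted to in time, and penalize events that escalated all the way
    to the automated fail-safe."""
    if event == "handled_before_warning":
        return score + 2   # driver noticed it before the system did
    if event == "reacted_after_warning":
        return score + 1   # warned, but responded before fail-safe
    if event == "failsafe_triggered":
        return score - 5   # system had to save the situation itself
    return score           # unrecognized events leave the score unchanged
```

A running score like this could gate how long the system allows hands-off operation before demanding input.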
Now if only they could automatically make cars exiting a rolling slowdown on the freeway actually get back up to the indicated speed of travel in an expedient manner.
Not true. There are other ways of doing this incrementally. For example: slow speeds, closed roads (no pedestrians or other cars), only in good weather, ideal conditions, etc.
We can't have an autonomous car that expects a driver to take over in a dangerous situation if that driver hasn't had to maintain control the entire time. For instance, there are youtube videos of drivers moving to the passenger seat in a Tesla with autopilot on.
Yes, and they're all undesirable, unworkable, or useless, as your own post points out.
Nobody is above level 2.
Google's self driving system only basically works with the route preplanned and premapped ahead of time, specifically for that car. Even small changes in the environment are potentially devastating. And even mundane weather changes it isn't prepared to handle.
It should be well understood that if the only people who can safely handle the vehicle are professional test drivers on a preplanned route, the car isn't ready to say it's at the level it claims it is.
• Data Recording and Sharing
• Privacy
• System Safety
• Vehicle Cybersecurity
• Human Machine Interface
• Crashworthiness
• Consumer Education and Training
• Registration and Certification
• Post-Crash Behavior
• Federal, State and Local Laws
• Ethical Considerations
• Operational Design Domain (operating in rain, etc)
• Object and Event Detection and Response
• Fall Back (Minimal Risk Condition)
• Validation Methods
Not sure if they're specifically ordered, but it seems positive that Data Recording and Privacy are up at the top. Has this ever happened before?
[1] And I don't mean wrong as in "NSA spying" because you disagree with the policy. I mean like, "regulations mandated everyone use Beta tapes and laser disk even though they quickly became obsolete."
This is of course once almost all cars are self-driving so it'll be interesting to see what happens in the midterm.
The data collection "black box" side of it is in a different section.
By the way, which area do the following scenarios fall under:
- Yielding to an emergency vehicle with sirens on.
- Moving backwards to a safe and large enough spot when the route is too narrow to fit self-driving car and oncoming huge lorry (and there is no line marking the limit between road and ravine).
- Upon instructions from authority, recognize that the highway is closed due to an accident and, no matter what the driving code says, you actually have to make a U-turn on the highway and follow the crowd. Alternatively, just take that route (yes, the one with the large no-entry sign at the beginning) or that narrow path in the wood (yes, it exists, even if Google Maps isn't aware of it). At the bare minimum, park yourself off the road and let the others move on.
- Verify whether a queue is forming behind you. Listen to the honkers; they may be right. When you are an obstacle to most of the traffic, moving to the side and letting others pass from time to time is sincerely appreciated.
Was that hyperbole? I would say the majority of regulations (at least in OECD countries) are sensible, and many that are not are intended to be, are outdated, or are politicized.
> I would say the majority of regulations (at least in
> OECD countries) are sensible
I think it can be shocking to non-Americans just how much Americans distrust their lawmakers and -- especially shockingly -- their civil servants, believing them to be both incompetent and malicious. American friends have found it incredible -- for example -- that something like NICE[0] can exist and people don't assume it's trying to kill them all; cf. "death panels".
I also wonder in what other developed countries Jade Helm 15 would have been controversial[1]...
[0] https://en.wikipedia.org/wiki/National_Institute_for_Health_... -- especially their guidance on how much a year of life is "worth"; see the "Cost Effectiveness" section
[1] https://en.wikipedia.org/wiki/Jade_Helm_15_conspiracy_theori...
Gun control is a great example that seems to confuse a lot of non-Americans. To your average San Franciscan, who has never used a gun and has no particular reason to use one, restrictions on e.g. magazine size probably seem quite reasonable. But go to an agrarian Texan rancher, and the situation is entirely different. Good luck thinning out a stampeding herd of wild hogs with a ten round fixed magazine. Similar situation with pot; the average SF resident is probably fairly familiar with it, whereas the rancher probably isn't. In either case, ignorance breeds irrational fear, which is a bad (but unfortunately likely) foundation for laws.
So yes, many regulations are not sensible, and it's harder to get away with in the US because the US isn't a monoculture. Even those regulations that are sensible (by whatever metric you like) are likely to anger some non-negligible group.
But regulations that are computer-focused? Less so.
Hopefully, car companies will deal with reduced demand by going upscale with more fancy cars for a smaller market.
Of course, someone needs to build all of those auto-taxis. They are going to do very, very well for themselves.
Why do you say that? I have no opinion either way, just curious
Really, it should be international.
The report recommends that "Manufacturers and other entities should develop tests and verification methods...". Does anyone know whether verification here means software verification, or does it mean something else in this context?
Edit: Just noticed that I got to the PDF via elicash's comment and not via the linked article. Here's a link to the PDF: https://www.transportation.gov/sites/dot.gov/files/docs/AV%2...
In this context, they mean verification and validation in the systems engineering sense. Software would be included in that it is a part of the whole system.
On one hand, at the low level -- sensors, motor control, etc. -- you likely have traditional hard real-time/MISRA C code, but on the higher layers you probably have things like DNNs and image recognition, which are much less deterministic.
So I am not sure how you reconcile these two worlds, and prove the system is safe and always works in a timely manner.
It seems the only sound approach would be to validate the whole system on a real road.
edit: as to SAE Level 2, it has this (and more) to say:
> Furthermore, manufacturers and other entities should place significant emphasis on assessing the risk of driver complacency and misuse of Level 2 systems, and develop effective countermeasures to assist drivers in properly using the system as the manufacturer expects. Complacency has been defined as, “... [when an operator] over-relies on and excessively trusts the automation, and subsequently fails to exercise his or her vigilance and/or supervisory duties” (Parasuraman, 1997).
also,
> Manufacturers and other entities should assume that the technical distinction between the levels of automation (e.g., between Level 2 and Level 3) may not be clear to all users or to the general public.
Two examples are:
1) If the vehicle is talking to the cars in front of it, it can know they are braking before it senses that visually. Also, the vehicles can speed up in a gridlock scenario more in unison, like a train.
2) On the interstate, markers in the pavement can be specifically designed for computer sensors rather than human eyeballs. Also, cars can draft together to save fuel.
Hackers will easily figure out a way to spoof the communication, and could play with traffic.
There are mitigations for most issues, but it's a complex topic.
Just imagine some scenarios:
-) Spoof an emergency brake advisory that causes tailing cars to also do an emergency brake. (could be mitigated by first observing that cars in front are actually slowing down before braking)
-) Spoof a command from a smart traffic light at an intersection to stop immediately for police / other emergency traffic. (need to check if traffic light is actually red)
-) Spoof speed restrictions issued by a smart highway traffic jam prevention system.
-) A system for police to force a car to stop immediately and pull over, eliminating car chases. Just spoof this signal and stop anyone you want. (mitigate by checking if there is a police car trailing you, and ignore otherwise).
And so on...
A way around would be to maintain a national database with public keys for each registered vehicle, and make cars only accept messages signed with those keys. But that would be hard to maintain, and hackers could still get hold of some vehicle's private key.
In the end, the driving system will always have to correlate such car-to-car communication with observations it makes itself.
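That two-step trust check — verify the message comes from a registered vehicle, then only act on it if your own sensors corroborate it — can be sketched as below. The key registry, vehicle IDs, and the 0.5 m/s threshold are all made-up illustrations, and the HMAC is a stdlib stand-in for the asymmetric signatures a real V2V scheme would use:

```python
import hmac
import hashlib

# Hypothetical registry mapping vehicle IDs to keys; stands in for the
# national key database suggested above. Real V2V messages would carry
# certificates and asymmetric signatures, not shared-secret MACs.
KEY_REGISTRY = {"veh-42": b"secret-key-for-veh-42"}

def message_is_authentic(vehicle_id, payload, tag):
    """Reject any message not signed by a registered vehicle."""
    key = KEY_REGISTRY.get(vehicle_id)
    if key is None:
        return False  # unknown sender: ignore outright
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def should_brake(alert_authentic, own_closing_speed_mps):
    """Act on a V2V braking alert only if our own sensors agree that
    the car ahead is actually closing on us (the mitigation from the
    spoofing scenarios above)."""
    return alert_authentic and own_closing_speed_mps > 0.5
```

Even with a valid signature, the alert alone never triggers a hard brake here; the car's own perception has the final word, which limits the damage a stolen key can do.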
And an autonomous system can react almost immediately anyway. So coordination doesn't give you all that much.
-- There are some useful ideas though, like:
-) Traffic lights can announce an ideal speed for a route, taking into account traffic and traffic light timings, so you can optimize throughput and minimize fuel consumption
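The advisory-speed idea reduces to simple arithmetic: given the distance to the light and its phase timing, there's a band of constant speeds that arrive during the green. A sketch, where all the numbers are hypothetical and the phase timings stand in for what a smart intersection would broadcast:

```python
def advisory_speed(distance_m, seconds_until_green, green_duration_s,
                   speed_limit_mps):
    """Return a constant speed (m/s) that arrives during the green
    phase without exceeding the limit, or None if no such speed exists
    (i.e. you should just wait for the next cycle)."""
    # Arriving the instant the light turns green requires the fastest
    # speed in the feasible band; arriving as the green ends, the slowest.
    if seconds_until_green > 0:
        fastest_needed = distance_m / seconds_until_green
    else:
        fastest_needed = float("inf")  # light is already green
    slowest_needed = distance_m / (seconds_until_green + green_duration_s)
    if slowest_needed > speed_limit_mps:
        return None  # can't make this green without speeding
    return min(fastest_needed, speed_limit_mps)
```

For example, a light 300 m away that turns green in 20 s and stays green for 30 s, on a 15 m/s road, gives an advisory of 15 m/s; if the green window is too short, the function tells you to wait instead.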
It's far far easier and quicker to throw a brick off a highway bridge but that surprisingly happens very infrequently.
We were working on diagnostic and emissions checking standards but there was the expectation that we would be able to make use of secure network links to cars at some point in the future.
The question at the time was which would come first. Would a requirement to do emissions testing under real-world conditions push the introduction of radio networks that could also be used for cars to talk to each other or would the road-train type applications be the initial use case.
I'm also hoping that one of the options is to upgrade an old car to a self driving car with an open source kit that you can buy and install it via a certified mechanic.
I think that would be an interesting future I'd like to be part of.
This is a big deal.
I'm not sure I would put much weight behind what he has to say.
It looks like consumers and automakers both want driverless cars, so putting the inevitable regulations in place quickly benefits both parties.
The reason I ask is that there are plenty of other countries in the world where cars just aren't that important -- take the Netherlands, for example. If you have automatic cars, society here just isn't going to be that excited, AFAIK. Public transport here is great and most people cycle everywhere, because it's fun, easy and good exercise. Not to mention a lot of people are employed as drivers.
Same for many Asian countries where population density is high, people just don't have the money/room for cars. Scooters are the way to go because of traffic congestion.
Besides, don't people enjoy driving? I don't own a car but when I get behind the wheel, it's a lot of fun. Will people really be able to handle the car doing the speed limit?
I understand technologically it's pretty interesting, but we've had commercial airliners that fly themselves (mostly) for a long time, same for ships and drones and we don't marvel over those things all the time, though I agree they are great innovations.
So apart from the tech what is the actual excitement about?
- Concern for those who will lose their jobs.
- Concern for others safety.
- Privacy concerns.
- Excitement about the safety benefits.
- Economic opportunity.
- Fundraising hype.
- All of the above?
As a Silicon Valley outsider sometimes I read HN and it feels like some context is missing. Sure it's going to change industries, but is this really good progress, necessary progress, or just the next thing we're told we need? I mean can a self-driving car really replace a delivery person yet, a person who can do things like leave packages with a neighbor and build relationships, trust etc?
Sorry if this is a little off-topic, but I'm genuinely curious because it's hard to understand, to me as an outsider, it really looks like some kind of ride-sharing turf war hype battle more than anything else?
I dare to say it, but it's the same for machine learning: a lot of it is fascinating, interesting, exciting tech, but how many product recommendations does one need? How good do my friend recommendations have to be? How smart does Siri need to be? Will a patient really feel better without being treated by a human? Are we really going to trust these things handling nuclear warheads?
Maybe I live a strange life and have unusual views, but I just don't really see the need most of these things when so many problems could be solved using other means. Using this stuff to help people is great, but how much of this effort is actually being put towards that end?
If I'm a little naive, apologies. I'm not having a go but these are just honest questions I often find myself asking when reading HN lately. Agreed this might not be the place to ask but I'm prepared to wear the down votes :)
Now imagine the scenario for most of the US, a public-transport-hostile country for the most part, where millions upon millions of people burn their precious lives waiting in traffic and sucking in traffic fumes. In my mind, this is one of the most appalling wastes of human potential that has ever existed. Sure, some try to make lemonade out of lemons by educating/informing themselves as they see fit but by and large, it is a huge waste. Not to mention the many thousands of people who die every year in car accidents during the daily commute.
So from my point of view, the self driving car is a thrilling concept: the ability to disengage from a useless, pointless, and hopeless daily grind and engage in something that I want to do, whether it be work, reading, watching a movie, etc. is cool. The closest I have come to this dream in my transit-unfriendly Texas city is the one job where I had an opportunity to take the train/bus into downtown: while this made my daily commute very long, I loved it because it freed me up from the drudge of driving.
Some might ask that perhaps I just hate driving. That is not true. I love taking road trips or autocrossing when I can. But to equate the daily commute with enjoyment is a bridge too far, in my opinion. Banish it, I say, banish it.
Will there still be no traffic jams, or will the car be like an office? In that case, why not just work from home and come in for meetings here and there? Might flexible / shorter work hours help?
I mean, people will still be riding around in vehicles, which often make people motion sick if they're not paying attention to their surroundings; cars require a lot of energy, take up space, etc.
I used to travel to work via train, it was 1.5 hours one way, it was highly productive time for me, but for some reason trains don't seem to make people as motion sick?
I guess one other thing to note is that in Australia, where I'm from originally, some people think of others using public transport or biking as kind of peasants, or feel it's inferior; that might be part of it too? They're also the kind of people who often like to drive fast and own expensive cars as a status thing, so I'm still not sure it's going to take?
Also, we all know the security on cars is weak. Who needs a bomb in a terrorist attack when you can just hack something and order 50,000 cars to crash? (That might sound silly, but hackers find ways.) Do you trust auto manufacturers enough to secure it? Yeah, we can mail firmware updates on USB sticks -- what could go wrong.
Sorry, I am pretty sure I am one of those hostile drivers. They should market it as a designated driver for the drunk; it would sell faster that way.
A sibling comment pointed out the loss of life issue. I recall (correctly, I hope) Sebastian Thrun mentioning that a traffic accident was part of his initial motivation. Reducing loss of life due to human folly is a strong motivator, but there are certainly ample opportunities for that beyond just this one.
Self-driving cars have widespread potential effects across society, from shipping to taxis to car ownership to the human angle in hours saved and lives saved. This is big. Think of all the lives lost and hours wasted in traffic in the US every year. (No functional public transit in most of the country, etc.)
It's an area where the challenges are largely technical--once the technology is safer than human drivers, we assume the regulatory issues will go away quickly. (And we probably underestimate the technical issues in getting there.)
The huge potential combined with massive and primarily technical challenges makes this probably the biggest thing since the Internet where a bunch of engineers feel like we can change society in a profound way with a bit of software.
It's partly because the tech is actually so new that we can project our expectations onto it rather than focusing on what is actually here today (which is impressive but far from the goal). Weigh this potential against the reality of what most of us actually work on today, and you may see the appeal.
Of course, reality will take time to catch up to the dream, but it's the dream that generates the excitement.
Cars play a large role in America. I don't know all the history about how it came to be this way, but I can make some guesses...
* America is very big, and a lot of the settlements are spread out by a ways. Cars make those communities less insular because they provide a way to get from one town to another, where biking would take a very long time and significant effort.
* I forget when (was it the Great Depression?), but there was a big government initiative to build interstate highways connecting places together by roads. Again, these were distant places, but by being able to travel by car, they now feel quite a bit closer. Most everyone on HN was born after this gigantic network of roads was already in place, and car culture was firmly cemented in the US.
* Due to the distances involved, getting a drivers license around the age of 16 or 17 is a huge amount of freedom bestowed on children just as they really desire such freedom. I spent a lot of time in a car as a teenager not just because of where we were going, but also since it was a mostly-private space for me and my friends.
* For the above reasons and many many more, America in general has a culture that is very centered on cars, so given that a lot of HN is both American and interested in technology, it makes sense there'd be a lot of autonomous car talk here.
I wish I had grown up somewhere less car-centric. I moved to NYC specifically so that I wouldn't have to rely on cars, and I quite like the public transportation here. When I've gone back to the rural New Hampshire town I grew up in, public transportation doesn't exist and getting anywhere except my immediate neighbor's houses takes a long time via bike, and I remember why I grew up with cars.
Edit to add: America is also heavily invested in cars and the culture that follows. The train system throughout the country has been in a terrible state for a looooooong time, and there's not much hope of it ever getting better due to the fact that we're so invested in cars. Some cities and towns have better public transportation support, but most don't. Some cities and towns have better support for bicycles and pedestrians, but most don't. There are occasional pushes to change things, but there's always heavy resistance due to just how deep into cars we are, and just how many people and local governments would truly need to get on the same page to make meaningful change.
A similar thing happened in Sydney, Australia. The city had one of the largest tram systems in the world, but the director of the motor trade authority was elected into government and made sure all the tracks were literally tarred over; they are still under the roads. The excuse used was that the network was over-congested / too popular and cars would solve the problem. Guess what? Sydney is now in the middle of putting the tram network back in to solve the car problem :) Melbourne, Australia was spared because it built its network in the 40's and 50's, and it was difficult to persuade the working class that tearing up such a newly built system was a good idea. People literally move to Melbourne because of the convenience the trams provide.
If you're interested Bikes vs Cars (http://www.bikes-vs-cars.com/) is an awesome documentary, it shows that LA even had things called "Bike Highways" at one stage. You will see how people were so negatively influenced by big business lobbying, and from memory outlines why New York was somewhat spared from the fate.
Sounds like you're being told the same stories again either way.
When the manufacturers "can't explain" how the accident happened (after an audit was performed), they should be fined the maximum $10 million amount.
Why? Because for one assuming it's just a glitch and "they don't know" about it, then they should pay for incompetence. And two, if the car was hacked by a nation state, then their security sucks, and they should again pay the maximum amount so they have the maximum incentive to ensure digital security of self-driving cars.
Where third-party self-driving systems are involved (MobilEye, etc), the liability/fine should be split 50-50 between the car maker and the system vendor.
Give car makers these "incentives" and the other regulations are more or less pointless (other than establishing common V2V and V2I standards and whatnot). Then you'll see just how hard they scramble to make their systems safe.
EDIT: And here we go. Remote hack of Tesla Model S.
https://blog.kaspersky.com/tesla-remote-hack/13027/
We're only at the very beginning of self-driving cars. What happens when there are 100 million self-driving cars on the road? Will their security be as terrible as it is on our PCs?
People should get scared about this stuff a lot faster, before all the car makers write their software, refuse to rewrite it from scratch, and just tack new security features onto poorly written systems in response to such hacks.
Is it an accident if the driver takes control of the automated system and drives straight into a wall? Is it an accident if a non-automated truck with a huge branch sticking out the back obscures the automated car’s cameras, leading to a crash?
Maybe you're a cowboy coder like me who bristles at detailed project specs and prefers some sort of goal or vision to work for. I totally get that. But when it comes to accountability, nothing beats a checklist.
Bad law can be either broad and vague to the point of endless litigation and uselessness (what you're proposing), or hopelessly detailed and self-contradictory (the kind of law you're likely reacting against). This article is celebrating a good, readable middle ground. That's exactly what we want out of law.
If the system is operating autonomously, then the fault lies with the system. Failing to leave sufficient following distance is a common cause of accident and almost always results in fault being found with that driver. This would not change if the driver was a computer system.
However - side note - objects that project from the rear of a vehicle must be flagged using a red cloth. So if the truck were operating with an unflagged load then they could be the party at fault!
If the occupant takes the vehicle out of autonomous mode, then fault would lie with the occupant, unless it was to avoid some kind of impending accident in which cause the situation would have to be examined in detail.
Chances are they'd opt-out of the opportunity altogether and wait for someone else to take the heat.
If it becomes acceptable for security and safety to be secondary to "getting the cars on the road ASAP and capturing as much of the market as possible" we -- as in the consumers -- will pay the price.
I understand that some of the innovations and progress will only come when we get the cars on the road at scale, but we should still build a giant -- exactly as the commenter suggested -- to loom over the shoulders of these car companies.
Most mechanical equipment is not reviewed on a case-by-case basis by a regulatory agency; however, aircraft incidents are. Aircraft products - meaning any product that is used on an aircraft, right down to the bolts attaching the overhead bins - are expected to be serviceable in "expected conditions of flight". If they are not, the manufacturer is subject to compensatory and even punitive damages [1].
Manufacturers can even be liable for their design decisions, unless the design decisions are specifically constrained by regulations. Obtaining product certification is a strong indicator that a product is compliant, but it may nevertheless expose the manufacturer to liability [2]. These are obviously difficult standards to meet, but they are appropriate when life-critical systems are in question.
[1] http://www.dailyreportingsuite.com/products-liability/news/_...
[2] http://www.mondaq.com/unitedstates/x/429650/Aviation/FAA+Wei...
> In response to the third and final question, the FAA explains that because an aircraft type certificate embodies the FAA's determination that an aircraft, engine, or propeller design complies with federal standards, it can play an important role in determining whether a manufacturer breached a duty owed to the plaintiff. The type certificate does not create a per se bar to suit, but ordinary conflict preemption principles apply to the particular design-defect claim. According to the FAA, the type certificate will preempt a state tort suit only where compliance with both the certificate and the claims made in the tort suit "is a physical impossibility" or where the claims "stand as an obstacle to the accomplishment of the full purposes and objectives of Congress."7
> Where the FAA has expressly approved the specific design aspect that a plaintiff challenges, that claim would be preempted. On the other hand, where the FAA has left a particular design choice to a manufacturer's discretion, and no other conflict exists, the type certificate does not preempt a design-defect claim. In other words, where the FAA has not made an affirmative determination with respect to the challenged design, and has left that design aspect to the manufacturer's discretion, the claim would proceed by reference to the federal standards of care found in the Act and its implementing regulations.
...
> The difficulty in applying the FAA's views on preemption to product claims lies in the fact that aircraft design specifications rarely require a specific design, but are instead couched in terms of performance or safety outcomes. For example, the certification standards for a stall warning system in a Part 23 aircraft requires "a clear and distinctive stall warning, with the flaps and landing gear in any normal position, in straight and turning flight" by a system "that will give clearly distinguishable indications under expected conditions of flight."9 A type certificate issued for a Part 23 aircraft would presumptively mean the FAA determined that the aircraft complied with these standards at the time the design was certified. However, would the type certificate preclude all product liability claims based on a defective stall warning system? What if the certification was actually wrong and the system did not comply with the standard when the FAA already said that it did? Can this type of claim actually be litigated or is it preempted?
> Additionally, what if the claimed defect was that the stall warning system did not provide a warning when operated outside certification limits such as weight, speed, or center of gravity? Are these conditions outside the "expected conditions of flight" and therefore no federal standard exists? The FAA's Letter Brief to the Third Circuit in Sikkelee does not provide clear answers in the context of product liability litigation. Courts will continue to struggle with deciding these difficult issues in the future.
Is this how you mean the owner's liability to work out? Because then I agree with you. If you want to punish Joe. Take away his savings, possibly his home. I don't see a point in that honestly.
Volvo is one manufacturer that has stated clearly that it will assume liability for the operation of its autonomous cars:
https://www.media.volvocars.com/global/en-gb/media/pressrele...