This is just a possible reason, and I'm definitely not suggesting that this is what did happen.
This is Google Maps saying "go right" in the middle of a tunnel: https://goo.gl/maps/G89cyQT2APUQuu6g6 (the two roundabouts are connected by a tunnel).
PS: If you use Street View you will see how it was 3 years ago, before the construction of the tunnel.
They probably already have a heatmap of driver interventions, which could be a starting point. The data might be usable for training a generalized confidence map.
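To make the idea concrete, here is a minimal sketch of what such a heatmap could look like, assuming a made-up log format of (lat, lon) driver-takeover points and plain grid binning; the function name and data are hypothetical and say nothing about Tesla's actual pipeline:

    import numpy as np

    def intervention_heatmap(events, lat_range, lon_range, cells=200):
        """Bin (lat, lon) driver-takeover points into a grid; dense cells suggest low confidence."""
        lats, lons = zip(*events)
        counts, _, _ = np.histogram2d(lats, lons, bins=cells,
                                      range=[lat_range, lon_range])
        return counts

    # Hypothetical usage: 500 takeovers clustered around one problem spot
    rng = np.random.default_rng(0)
    events = list(zip(rng.normal(37.7749, 1e-4, 500), rng.normal(-122.4194, 1e-4, 500)))
    grid = intervention_heatmap(events, (37.76, 37.79), (-122.43, -122.41))
    print(grid.max(), "takeovers in the busiest cell")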
The precedent is there; prior EAP did this. On one interstate I frequent, it would handle the merge on-ramp/junction fine going north, but heading south (with a 270-degree right turn) it would beep loudly and force a driver takeover.
Should Tesla's Autopilot get confused because of this, that would be really bad.
That seems undesirable, since it would lead to rear-ending in situations like this.
because of shadows that confused it
because of concern about cars crossing the intersection ahead where there is only the need to gently slow
because of a car crossing an intersection ahead that has already crossed!
because of a steep bridge that confused it
We basically can't use cruise control in the car; we decided it is too dangerous at worst and jarring at best.
Because there is so much hype around Tesla, people get the impression their software is “hardcore,” but a lot of it is absolute garbage.
Intuitively when a car suddenly slows down it feels like the right response is to press on the accelerator: however in my experience this puts you in an unstable regime where you are “fighting” with the car. That is, autopilot continues to try to stop but temporarily accepts your override, but only as long as you maintain consistent pressure on the accelerator. Unfortunately if you even briefly let up on the accelerator, the car proceeds to violently slam on the brakes again. This can lead to a feedback loop where the car’s rapid acceleration/deceleration pattern makes it difficult to maintain consistent pressure on the pedal, and so the experience is like being in a rodeo. Worse, there’s no obvious way “out” of this cycle except to take your foot off the accelerator and let the car (briefly) win.
Counterintuitively, the “correct” way to deal with phantom braking is to avoid the accelerator entirely and instead dive right for the brake pedal: this instantly disengages cruise control. But this is not an intuitive response; you have to learn it the hard way.
My 6 year old car uses radar for adaptive cruise control and has only tried to (arguably) improperly stop or slow when someone crept over the line into the lane I was driving in. I have no issue turning it on and leaving it for hours at a time.
After all, a Tesla is a pretty good substitute for a horse!
Most of the time FSD just wrecks the Tesla itself or injures the driver of the Tesla (i.e. running into trees/dividers, running into much heavier freight trucks).
It will be interesting to see whether Tesla comes in to provide monetary support for proving the legal case that Tesla FSD is not at fault, or whether the Tesla driver (and his insurance) will be left to fend for themselves.
In the short term I could see Tesla not supporting the driver and absolving themselves via fine print/TOS, etc.
But the long-term effect of not legally supporting any driver in Tesla FSD accidents will be that new customers won't trust this $10,000 upsell product that's highly profitable for Tesla.
I could also see 3rd party (non-Tesla) insurance companies refusing to sell coverage to Tesla FSD drivers.
It could also make Tesla's first-party insurance untrustworthy to customers and could become a huge liability for Tesla.
It seems like it will be a great litmus test to see if Tesla has the guts to step up for its own product.
[1] First video shows a potential unsafe lane change https://theintercept.com/2023/01/10/tesla-crash-footage-auto...
Autopilot has killed multiple motorcyclists, and is suspected in many other cases, totaling 19 fatalities. This isn't the first, our regulatory bodies are just incredibly slow at this.
https://arstechnica.com/cars/2022/08/tesla-faces-new-probes-...
https://www.theverge.com/2022/7/27/23280461/tesla-autopilot-...
I would love to see manslaughter charges for more accidents. If I do a whoopsie and stab someone in my home, I'm not going to get off with an "oh my god I'm so sorry! I was tired and it was foggy." People driving should be held to the same standard.
> In the short term I could see Tesla not supporting the driver and absolving themselves via fine print/TOS, etc.
> But the long-term effect of not legally supporting any driver in Tesla FSD accidents will be that new customers won't trust this $10,000 upsell product that's highly profitable for Tesla.
Tesla publicly disparages people who died relying on their products, and refuses to cooperate with the NTSB. I'd expect nothing less in this case. Somehow that hasn't been a big factor in sales.
It seems the take rate of FSD for new Tesla purchases is not as high as it used to be - perhaps due to the increase in price and other Autopilot-FSD bundling/unbundling aspects - but also perhaps due to negative press from the accidents thus far. [1] Definitely something to watch as/if the incidents accumulate.
[1] https://twitter.com/troyteslike/status/1586356451639189504?
This is completely false. Tesla is legally required, and complies every time, to release crash data to the NTSB. This data includes whether or not self driving was enabled.
> Tesla publicly disparages people
Tesla disputes claims that self-driving was enabled when people lie about it. I am sure there is an incident or two of someone being sassy about calling out these lies, but there is no trend of "disparagement".
Like this one - even if the driver had stopped the sudden braking and moved forward, the cars behind him could still have crashed.
That’s the thing about testing on the public roads - there are many ways you can affect other users.
In a pileup like this it's basically never the fault of the front car, unless maybe they are purposely causing the accident for insurance fraud or something. Maybe the driver will get cited for failing to maintain the minimum speed, but legally this isn't much different than if someone backed into the Tesla while it was parked in a parking garage.
That video looks like a combined lane change and brake check on the part of the Tesla.
* and disappointed
The grey area will require some defense and it will be interesting to see if the Tesla driver is left high and dry by Tesla.
Quebec woman who stopped for ducks, causing fatal crash, loses appeal
https://www.cbc.ca/news/canada/montreal/emma-czornobaj-loses...
Because typically the car in the front stopped or slowed for a reason that does not violate any rules or responsibilities. But when they have neglected to follow rules, or uphold responsibilities, then they can share fault.
Generally speaking, drivers in the US have a legal responsibility to pay attention to what is going on and operate their vehicle with care.
Considering that the police report evidence includes the FAQ page from Tesla for the question “Do I need to pay attention while using autopilot?”, I think it’s clear what direction they’re going here.
From the police report:
> V-1 made an unsafe lane change (21658(a) California Vehicle Code) and was slowing to a stop directly into V-2's path of travel. This caused the front of V-2 to collide into the rear of V-1 (A.O.I. #1). P-2 did not have enough time to perceive and react to V-1's lane change.
V-1 = The Tesla
> P-4 observed V-3 stopping and applied V-4's brakes. V-3 came to a stop to the rear of V-2. P-5 observed V-4 stopping and applied V-5's brakes. As V-4 slowed down, P-4 steered V-4 towards the #2 lane. Due to P-5's unsafe speed for stopped traffic ahead (22350 California Vehicle Code), P-5 failed to safely stop behind V-4 and V-3. The front of V-5 collided into the rear of V-4 (A.O.I. #2). V-4 moved into the #2 lane without colliding into any other vehicles. V-5 came to a stop in the #1 lane after colliding into the rear of V-3 (A.O.I. #3).
and it goes on from there...
https://www.documentcloud.org/documents/23569059-9335-2022-0...
Brake checking (what the Tesla did) definitely does make the front car the guilty party. It's usually done for insurance fraud; here it was presumably just the AI gone mad. But same result and same guilt.
The second car had left a more than adequate stopping distance. The Tesla changed lanes close in front of it and then immediately braked as hard as possible, deliberately. The driver of the Tesla should lose their driving licence.
The drivers following the second car weren't leaving enough distance or paying enough attention.
At that point humans will theoretically be the weakest link, and anyone driving "manually" will be a liability because they will lack the information and reflexes to deal with whatever is happening around them in a timely manner.
I was thinking today about the Southwest disaster, not only for customers but for the company’s reputation. But I know a great way to win it back: cash. Promise it won’t happen again, but if it does, offer best in industry cash compensation. Prove that your company gives a shit. I will be very disappointed if they expect time alone to heal this.
Imagine there is an 'autopilot' gun, you buy it, and it comes with a contract that says you take full responsibility for the gun.
Then it shoots me and kills me before you have a chance to react.
The prosecutor will go after the manufacturer. If the manufacturer wrote code that kills me, you and any contract you signed are not even relevant.
You cannot contract away criminal responsibility. Otherwise I could contract away all my responsibilities to a random homeless guy.
https://en.m.wikipedia.org/wiki/Protection_of_Lawful_Commerc...
Interestingly, the article is careful to say that the driver "claims" it was on FSDbeta.
More to this story.
Auto Lane Change
To initiate an automated lane change, you must first enable Auto Lane Changes through the Autopilot Controls menu within the Settings tab. Then when the car is in Autosteer, a driver must engage the turn signal in the direction that they would like to move. In some markets depending on local regulations, lane change confirmation can be turned off by accessing Controls > Autopilot > Customize Navigate on Autopilot and toggle ‘Lane Change Confirmation’ off.
We've entered the age of 'blind trust in technology'. We can hardly get out of bed without it.
In Tesla's case, the failure is that they are testing a new way of doing things without properly saying so; Muskito is pushing "it works" while it is actually only in the early stages.
This is a failure to society, like the 737 Max and should be judged accordingly.
Then comes innovation, which necessarily starts with limited cases. And we tend to launch businesses and products as soon as we can. Hence, with the minimum acceptable cases. This is fine as long as failures cause minor, repairable degradations instead of catastrophic, unsurvivable ones.
And still, we need systems designed with graceful degradation at every layer. That enables us to perform as Woody and Buzz Lightyear said: "falling with style".
I do use it, but less than I did the old system: I just do not find it relaxing because I cannot really grasp intuitively when I need to override it. On standard cruise control, it was obvious to me when I needed to take over. Therefore I am more rather than less vigilant than I was with the old system.
I don't want to be too hard on the Golf: it has other safety features I really like, such as lane assist and automatic braking. But I am not a fan of the adaptive control, and I think the article helps me understand why: it's a Level 2 problem!
And if you get too far behind him you might bump up the speed a notch.
And if somebody gets in the gap it's even better, because they are going to eventually overtake him just like they overtook you, and overtaking one car at a time might be safer than two cars at once.
Even a 1 mph differential in speed is a car length change in following distance every 10 seconds. A 1/2 mph difference is 3 car lengths of change every minute. That seems like enough that would be annoying, but it obviously wasn’t when two cars are following each other both on cruise control.
I still don’t have a fully satisfactory explanation for something that I can easily observe (that it’s usually not annoying).
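For what it's worth, the arithmetic above is easy to check. A quick sketch, taking a car length as roughly 15 feet (an assumed figure):

    FT_PER_S_PER_MPH = 5280 / 3600        # 1 mph is about 1.47 ft/s
    CAR_LENGTH_FT = 15                    # assumed typical car length

    gap_change_10s = 1.0 * FT_PER_S_PER_MPH * 10   # ~14.7 ft after 10 s at a 1 mph difference
    gap_change_60s = 0.5 * FT_PER_S_PER_MPH * 60   # ~44 ft after 60 s at a 0.5 mph difference
    print(gap_change_10s / CAR_LENGTH_FT)          # ~1 car length
    print(gap_change_60s / CAR_LENGTH_FT)          # ~3 car lengths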
this, so much
> how you were really able to use cruise to follow another car at all before?
not op, but you probably just don't, you disengage, cuss out loud and follow them manually until they come to a decision to stop hogging the lane.
PS that's also why I hate the newer-style blocked-off HOV lanes here in sfbay, which seem to be a total trap and a magnet for idiots who drive 30 miles below the speed limit.
That said, I have the almost opposite take! The adaptive CC wasn’t perfect, but it had a perfect failure mode all but one time: it simply stopped working when its sensors got fudged in any way.
The one exception was a near pileup when drivers ahead of me came to a sudden stop in the middle of an urban ramp soup that’s always high risk. Even then it did the right thing: it screamed (beeped) at me to brake, because I needed to control that and it had no safe way of doing it for me.
I’m similarly more vigilant with assisted CC, but I’m more inclined to use it because it makes me more vigilant. I almost never used a plain CC without it because it’s too easy to get lazy at the wheel and make mistakes. Constantly checking on the known-limited computer kept me alert but let me rest my legs for ~10k miles over a month and a half.
And it saved my ass on a sudden ice patch in Nevada, which was the other time it didn’t work exactly as expected. I don’t drive with a computer in control, I only delegate to it for menial tasks like predictable speed adjustment. But I’m grateful I had the computer to help me regain traction coming around an icy bend when the alternative was me and my pup and everything we had in tow were going to roll off a mountainside.
On the other hand, I found the dumbest 'cruise control' mode on the T3 downright dangerous: you always had to have your foot on the gas to be ready to override it. They took something that was solved a long time ago and made it worse; I really do not get it.
I heavily use adaptive cruise control and I've never had any issues with it, though I'm always ready to take over in case anything complicated starts happening. It's a nice feature because it saves a lot of energy when the driving is very easy, and then you just drive normally otherwise.
I would think the manual likely said not to use it while towing. I’m not sure why one would be afraid of it when used correctly just because it didn’t handle well something it wasn’t designed for.
Ta-da! 1 pedal driving :-)
Older "dumb" cruise control you had to be ready for the brake at all times — there was no need to second guess the system.
When I get to the inevitable slowdowns, my car just slows without me having to think much about it unless it’s a sudden one.
I also have a Golf with standard cruise and I can’t stand using it since most drivers around me can’t maintain a constant speed, so I am always adjusting it or canceling it.
That seems bad!
With ACC I can decrease the amount of foot shuffling and just rest my foot next to the brake pedal in case something happens up front. My car model comes with emergency braking that can be toggled off, but it worked perfectly the one time a driver slowed down unexpectedly (it alerted me first, then braked).
In slightly heavy traffic conditions on the highway it was unpleasant/confusing.
Vigilance is hard to moderate when you start automating too much.
This means that cruise control goes from being something that relieves stress on you to something that causes additional stress.
It takes the moment-to-moment mental load off so you can focus better on the other cars around you.
If the cars ahead of me are slowing down significantly I disengage and go manual until I have a better idea what’s going on.
My car can supposedly stop completely that way. I’ve never tried that: as it gets very slow (relative to highway speed), it’s willing to get way closer to the vehicle in front of me than I am. I trust it (in theory). But it’s not worth testing.
Also FTA quoting Tesla (https://images-stag.jazelc.com/uploads/theautopian-m2en/repo...):
> It does not turn a Tesla into a self-driving car
Is it self driving or not?
Seemingly Tesla is copying this: "self-driving" doesn't actually mean the car will drive by itself, but that the driver can drive "less" compared to before.
Deceptive marketing at best, fatal at worst.
Oh and it's not self driving.
If the Tesla was at the back or the middle I think it's highly likely it would have outperformed the other drivers and stopped faster or kept a safe distance.
A series of problems led to this accident. FSD Beta gets 5% of the blame here for stopping on the highway, which isn't a feature exclusive to Tesla - cars stop on roads all the time.
What this demonstrates is 7/8 or 8/8 drivers here were driving unsafely.
I think this is the most important point of the article, and it is largely ignored here in the comments, which seem to focus mostly on who was to blame for this specific accident.
We know the strengths and weaknesses of both humans and tech at this point in time. Humans are overall better decision makers but aren't 100% focused 100% of the time. Tech gets confused a lot but is never tired or inattentive. So if your goal is safety you let the humans drive but take over in emergency situations when the human is not reacting. Which is what most car manufacturers do right now. Letting the tech drive and expecting the human to provide perfect reaction time every time the tech fails is playing on the weaknesses of both. This is focusing on cool marketing at the expense of safety.
This isn't even uncommon. Almost every Honda sold for a while has had an L2 system that will take over in certain ways if the car believes a crash is imminent, such as a car suddenly braking ahead.
> but would be less sexy
It's so less sexy people don't know that millions of vehicles are sold this way...
I do not understand why the company has not already been sued into oblivion for an obvious lie that has killed people.
Tunnels and underpasses are the worst. They are a pain in the ass, because shadows mess with all the edge detection and motion models and anything else visual. Humans compensate by thinking "I'm in a tunnel: things are weird." But without a reasoning model that can take context into account, the computer is stuck.
In the video from behind, you can see the shadow ahead of the car on the floor of the tunnel that it carefully stops just before it would hit. A person would notice that EVERY OTHER car had driven straight through the thing it thought was an obstacle, but that is also context this car isn't going to take into account.
I worked on autonomous vehicles (in vision) at Daimler in 1991. During one of our test sessions, on drying pavement, the vehicle abruptly slammed on the brakes and refused to proceed past a point where the vision system could see a set of horizontal edges on the pavement, symmetric about a centerline. It tightly fit our (hand-coded, it being 1991) model for a car ahead. We had to revert to manual control and drive back to our staging area and wait for the track (a set of runways and taxiways at a disused airbase) to finish drying.
Obviously, the state of the art has significantly improved since then, but some fundamental risk of misinterpretation could easily remain.
This wasn’t going straight on a highway stopping in the same lane the driver was already in.
I can understand how the conditions probably made it worse.
The tunnel is not curved; it's straight.
The Tesla switching lanes and abruptly stopping was a big problem, and I can understand that first car having trouble, but the cars behind the first one should ideally all be keeping a distance that leaves them ready to hit the brakes in an emergency. I find that, at least in Australia, a decent number of drivers do not keep an adequate distance from the car in front for emergency braking.
I’d even wager that if that pickup hadn’t swerved and everyone had just slowed down, it wouldn’t have piled up.
Side tangent, I love watching car crash videos. Really interesting to see how the system breaks down and people make mistakes. I spend hours on YouTube sometimes :)
Your and my YouTube feed algorithms....
They kind of creep me out when it is a crash that was not avoidable. But, boy, most of the crashes have me talking to the computer display, "What are you thinking? You have no stopping distance at all, I know exactly what's coming up in about 3 seconds!"
It's worth anticipating this by watching the other lanes and what drivers are doing. If you can see a slow down happening in an adjacent lane then you might reasonably expect at least one of the drivers approaching to pull over in front of you.
TL;DR - Sometimes the 3 second following distance is just not enough even if someone is paying attention because they can only see the car in front of them.
That's the thing, though - if you were already at the distance limit, and the car in front of you starts slowing down, you have to also start slowing down right away to maintain said limit. If you do not, then it's already "not enough distance" by definition at that point, and you're the one responsible for that.
Yep... the usual advice locally is 5 to 7 seconds gap for freeways and similar, before accounting for weather and other conditions. Of course, that assumes you can leave a gap without some bastard deciding to sneak in and occupy it.
Completely autonomous self-driving cars (without any steering wheel, so even incapacitated or clueless people may 'drive' (like drunk or in labour or children)) indeed seem like a good solution. (Except we need less individual traffic for env reasons.) Unfortunately, the problem is very hard, technologically, and the current interim solutions will stay for a while.
[1] In fact, this is so unlike FSD's behavior that I still think it's more likely that it will turn out not to have been in use at all. The only evidence at hand is one sentence in a police report that the police themselves state was unvalidated. How easy would it be to blame the car as an excuse?
In any case, I am not taking an anti-self-driving stance by any means - we'll get there eventually. Tesla is leading the way and taking all of the negative press.
Not that FSD Autopilot is what it's marketed as, but this is the responsibility of the driver and not the car.
Since fighting for release of this video and publishing the story, Ken Klippenstein has been censored on Twitter through shadow bans and inability to find his profile through search.
While automatic cars doing random things is certainly problematic, clearly the cause of the crash here isn't the Tesla; it is the other cars not respecting minimum safety distances and not being able to stop when there is a traffic jam ahead.
If a human driver did it deliberately as the Tesla did (since it was designed to pull over and stop) then I would consider it a criminal level traffic violation on the part of the stopping car.
Drivers are indeed expected to always be prepared for sudden stops even at the highest speeds.
If it had been a large deer, of course he should have stopped, at that point it's the safety of the people inside the car.
In law, for insurance purposes, it needs to be clear cut: the person behind is almost always at fault. But that doesn't mean they are the cause of the accident in all cases. There is nuance to these things, and part of that is that braking for a rabbit, or using Level 2 automation, increases the chance of an accident happening on the road.
If the way you drive increases the risk of someone driving into the back of you, even if they haven't left enough space, you are at fault in my mind.
For example, pedestrians crossing a bike path. Because a lot of people clearly don’t walk often they will just walk out without looking. People aware of their surroundings will look both ways. As soon as you see someone do that you can pass close to them without spooking them because you know they’re aware.
My point here is a ton of this comes down to acting predictably. Even a simple act like looking at someone will alleviate a ton of uncertainty.
The barriers to fully autonomous self-driving are huge and not necessarily technical. Acting predictably, being able to explain actions, drivers driving differently because another vehicle is automated and cultural differences.
Intentionally stopping inside a tunnel is a pretty clear cut case of dangerous driving over here.
My driving school teacher used to say: always remember that the car ahead of you could suddenly stop at any time for a reason that you might not know.
I still suspect it would be classed as dangerous driving, as there wasn't a need to stop, nor did they stop in a safe place.
If it did suddenly brake though and it was the car, that seems like something Tesla should be liable for. The time taken for any driver, when a car starts automatically braking, to assess the situation and override isn't going to be enough to avoid the dangerous situation.
Can you provide an example of someone getting a life sentence because of an accident caused due to braking? That doesn't sound right.
https://www.forbes.com/sites/bradtempleton/2023/01/11/an-8-c...
Conversely if you need to stop suddenly, e.g. something has crossed the road in front of you (or you think it has), you don't worry about the vehicles behind, you just stop.
People often forget about the most important traffic rule: it is not allowed to cause an unsafe situation.
But you would need to prove that incorrect automatic braking is intentional, which has not been proven in any court I know of.
The second video here clearly shows it crashing into the Tesla:
https://theintercept.com/2023/01/10/tesla-crash-footage-auto...
> While automatic cars doing random things is certainly problematic, clearly the cause of the crash here isn't the tesla
The Tesla changed lanes, moving in front of the second vehicle, and immediately applied the brakes. That is 100% the Tesla's fault.
The cause of the crash is the Tesla. You are not allowed to stop on bridges, in tunnels, and in several other places. The crashes starting with the Nth car and not the 1st is normal. Reaction time of the first car eats into the reaction time of the second, and so on until there's no more time to stop. Understand that cars further back do not see the car in front of them applying the brakes and slowing down; they see a car moving at normal speed instantly crashing. The minimum safety distance is not as big as reaction time plus stopping to zero; that would be huge.
Try to park on the highway and claim the people who crash are at fault and see how it plays in a court of law
The Tesla autopilot failure is really bad, for sure, but those human drivers should be banned for life. There’s no excuse for ramming into a traffic jam because you weren’t paying attention.
Edit: occurs to me that possibly I’m being overly harsh here. Is there something about the dynamics of traffic that puts the cars three or four slots back at greater risk when somebody unexpectedly stops? I would assume that the immediately following car is at the greatest risk, but after that everybody is at successively less risk as they should all be slowing and should all see each others’ brake lights.
The first driver needs some reaction time, so he’ll start braking after the Tesla starts braking. Which means, assuming they were at the same speed initially, that the 1st car will have to brake a bit harder than the Tesla.
Then the second car will be in the same situation; it will have to brake a bit harder than the first. 5 cars later, you are at the hardest possible braking power, as it’s ultimately limited by tyre grip.
So, if there’s not a larger gap somewhere in the string of cars to allow for braking less hard than the car in front, it’s more or less inevitable.
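A rough back-of-the-envelope model of that cascade; every number here (speed, gap, reaction time, braking limit) is an assumption picked for illustration, not data from this crash:

    V = 27.0          # m/s, about 100 km/h
    GAP = 20.0        # m, assumed gap between each pair of cars
    REACTION = 1.0    # s, assumed driver reaction time
    TYRE_LIMIT = 8.0  # m/s^2, roughly the hardest braking dry asphalt allows

    # The lead car brakes at a moderate 4 m/s^2. Each follower reacts late,
    # so it has (GAP - V*REACTION) metres less room to stop in than the car ahead.
    stopping_distance = V**2 / (2 * 4.0)
    for car in range(1, 8):
        stopping_distance += GAP - V * REACTION
        needed = V**2 / (2 * stopping_distance)
        verdict = "ok" if needed <= TYRE_LIMIT else "beyond tyre grip -> crash"
        print(f"car {car} needs {needed:.1f} m/s^2 ({verdict})")

With these particular numbers the required deceleration creeps up car by car until the seventh follower runs out of grip; it illustrates the shape of the argument rather than reconstructing the actual pileup.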
An autonomous car can travel on both normal roads, and in an autonomous lane. Roads are everywhere -- there's a road leading up to most people's place of residence.
Trains rely on tracks which are not as common as roads. Trains are also not owned by individuals. Most people do not have a train station outside of their house.
If you had a train you could drive your car onto it would be more equivalent.
Also train lines have far less coverage than highways.
Public transport is great, but you aren't going to convince others to support it by pretending it doesn't have any drawbacks compared to private transport.
Thanks for cutting through the meandering, pseudo-intellectual cruft with an answer that makes too much sense.
Lanes are really expensive, and it's unreasonable to have an autonomous-only lane, so a shared lane is the baseline. I propose that vehicles in autonomous mode be restricted to the slow(est) lane, where it's safer to come to a sudden stop, and the driver, manufacturer and insurer have to contend with the probability of being flattened by an 18-wheeler.
Lanes for buses and vans? Yes. SDVs? Maybe in several decades, and even then they should be for HOVs only.
Let’s wait to get more info as to what the driver was doing and if they were incapacitated.
The FSD beta is pretty aggressive about making sure the driver is paying attention via the steering sensors, the in-cabin camera, and the touch screen.
You can see them get out of the vehicle and walk around a bit in the video[0], so I guess they were not incapacitated - see around the 46s mark in the video. The driver also claimed they were using FSD at the time the car performed the maneuver.
[0] https://www.reddit.com/r/teslamotors/comments/108kpgo/footag...
If that is its best then FSD has no business being on the road.
It was erratic and didn't give clear intent to other cars what it was trying to do.
So the first car to hit it may have had a very safe distance to the car ahead, but the Tesla cut that significantly with its sudden maneuver.
But ultimately the behaviour of the Tesla FSD system is what caused the chain of accidents.
This is incredibly true and obvious.
It's likely people monitoring their car aren't doing anything at all: sleeping, reading a book, whatever.
But if anything, actual monitoring is harder than doing: it means actively watching what the machine does, understanding it, thinking about what one would have done, doing a diff and analyzing it, all in real time, all the time. It's exhausting.
That a situation like this is even legal is amazing.
It's as if they expect the speed limit sign to adapt to the circumstances. They have the attitude, "Well if I need to slow down, why does the sign say I can go 70mph?"
It could be freezing rain with black ice or a fog with zero visibility, and the overwhelming majority of American drivers would not change their driving one bit. I have seen this in every corner of the country and it is noticeably different from other countries.
There will always be unexpected obstructions in the roadbed of major highways. Stuff falls off. People lose control and roll and end up obstructing the lanes.
So while the Tesla and its self-driving mode were the proximate cause of the obstruction that led to the collision in this particular instance, they were absolutely unrelated to the actual cause of this entire category of accidents.
This attitude affects many other aspects of American life. Notably, gun violence. Exactly the same feeling of entitlement to go full speed ahead and damn the consequences dictates the occasional outcome of both highway driving and eating your lunch quietly in the school cafeteria.
There is little difference between an American in a pickup truck barreling down out of the Alleghenies at 80mph in fog and freezing rain and an American with a not-meaningfully-regulated gun barreling into a classroom and opening fire.
Both are the direct result of a sense of entitlement, and both regularly lead to mayhem and death.
You can't blame the Tesla for this.
It's so frustrating how mind-bogglingly polar everything has to be. Nothing in your entire post precludes the Tesla from being partially at fault. No matter how long you go on about how reckless American drivers are (and they are), none of it serves as one iota of evidence as to whether or not the Tesla behaved reasonably or whether its features were advertised properly. Why is absolutely everything either an admit-nothing attack on our "enemies" or an allow-nothing defense of our "allies"?
Sure you can, because this was 100% the fault of the Tesla. The SUV did nothing wrong. Nothing.
As for the safety distance: This is a valid point. If the Tesla has not kept a safe distance to the following car when changing lanes, there is a problem with the automation. It should always be as defensive as possible. I don't see that from the police report, but the video shows the distance to be rather short, maybe two car lengths. Call it 10m. The second car doesn't really seem to brake very hard (no emergency braking assistant?) and that crash is definitely on the Tesla as it cut the distance, unless the speed limit in that tunnel would be around 20kph or so. But now it gets interesting: The third car does stop successfully. The big pileup happens afterwards.
In conclusion: The big pileup happens because the third car suddenly stops - and rightfully so - but the following cars don't keep their distance to that third car. The Tesla getting rear-ended is just a lazy excuse for these drivers, who simply drive recklessly.
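For the "around 20kph" estimate, here is a minimal sanity check under two stated assumptions: both cars can brake roughly equally hard, and the following driver needs about 1.5 s to react. In that simplified model the follower closes roughly v * reaction_time before it even starts braking, so the gap must be at least that large:

    GAP_M = 10.0       # m, assumed gap after the lane change
    REACTION_S = 1.5   # s, assumed reaction time

    v_max = GAP_M / REACTION_S                 # highest speed at which a 10 m gap survives
    print(f"~{v_max:.1f} m/s, i.e. about {v_max * 3.6:.0f} km/h")

That comes out to roughly 24 km/h, in the same ballpark as the figure above.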
Absolutely nothing was in front of the Tesla; there was nothing that could have lost its load.
The difference is that the humans in the cars behind will be able to see that too and anticipate the reason for the braking. In this case, the Tesla just changes into the leftmost lane and then brakes very quickly to a stop for no apparent reason. A human doing the same thing would almost certainly be at fault too.
You can see on the video that it's not just suddenly stopping, it's changing lanes (with what appears to be very little room from the car behind) and then suddenly stopping.
The Tesla safety report which the company boastfully released wouldn't have counted this incident since the Tesla itself wasn't harmed.
Stuff like this happens all the time. Pile ups on highways have been a regular thing for as long as highways have existed. The pattern is always the same: something unexpected happens in front of somebody and they fail to respond or they overreact. The problem recurses behind them as more vehicles become part of the problem. It doesn't matter if the problem at the front is real or not. The actual problem is that drivers further back are not prepared for a thing that should be considered something that can happen at any time.
The car in front of you might have all sorts of reasons to suddenly slam the brakes. You have no way of telling when they will do that because they are blocking your view to what's in front. There might be an unseen obstacle, engine failure, fog, a hole in the road, whatever. You have no way of telling until the brake lights in front of you come on. It doesn't actually matter why they slam their brakes or whether there even is a good reason for them to do that. You have to be ready for that eventuality and the appropriate response is going to be slamming your own brakes. And the car behind you had better be ready for that too.
Obviously we had a cascading failure of multiple drivers not being ready for that here. And they are dodging their own responsibility by pointing to the problem in front of them instead of the idiot behind them, or themselves. The second car obviously wasn't the problem as they avoided crashing into the Tesla. Good job. The Tesla is fine. It was the driver behind them that was the real problem.
The irony of human failures like this is that taking them out of the loop might make things safer. The Tesla stopped. It doesn't matter why. Human drivers behind it failed to act. Would that have happened if they all had FSD on or would we just have had a weird traffic jam occurring for no good reason at all? That too happens all the time.
The automated cruise control and lane-keep on my previous car, an Audi, would occasionally do crazy things too. The hyper-focus on this being Tesla's fault continues to baffle.
Additionally, how can you blame the driver? Did you watch the video? It's not as if the car slowly stopped. It slammed on the brakes. There wasn't time for the driver to correct for this before the car behind -- which Autopilot had just moved in front of -- slammed into them.
Tesla's Autopilot and Full Self Driving get a lot of focus and criticism for two big reasons: 1) their product names clearly and deliberately paint a picture that the features do not live up to, and disclaimers do not fix that; 2) they are unsafe on public roads as they operate today. They are a menace to public safety.
I think this is the thing that makes the most sense. But if I were to bend over backwards giving the benefit of the doubt - is it possible the driver feared resuming the previous speed because they were worried the car saw an obstacle that they couldn't?
Like the article suggests - if you're not fully engaged, it's as if you're driving impaired. And there's so much automated that it's difficult to be so vigilant. Once the car starts braking, how much latency can I afford just assessing what caused it to slow down? How hard is it braking and how closely is traffic following me?
What is the reason I should pay all of this extra money and still have to maintain full focus and attention on the road around me at all times? What is the market for it unless it's for me to be able to zone out more while driving?
Umpty-years ago I worked on robot tanks for the US Army, and we had a devil of a time answering this question. If the driver still has to be fully monitoring a single vehicle at all times, why isn't s/he driving it directly? What benefit does this not-quite real self-driving provide? Our soldiers told us that it mostly got in their way, because they wanted it go over here and the computer wanted it to go over there, and so they felt like they were fighting the computer more than anything else. It is possible that someone will come up with a better adjudication than we did, but it really made me question whether any level between 2 and 5 is actually useful to anyone.
Until "self driving" is good enough that you're legally allowed to ride it drunk, it should not be allowed to do anything more complicated than radar cruise and lane-keeping.
Other self-driving implementations, e.g. Cruise and Waymo, have LiDAR, which is an additional and, in my opinion, necessary sensor for detecting objects.
Because in this case it sure looks like a phantom braking issue.
On the other hand, the sooner we get FSD cars, the sooner we will also get ASI and with it potentially the end of humankind.
Like, how do people reason about this? It seems uniquely American to me.. These videos of people simply driving into stuff that's in plain sight right in their lane? What do they do if there's a fallen tree, tire, shopping cart? Simply pile into it instead of braking?
I'm wondering because, in my country, when you drive a car, you're supposed to pay attention to the road and always be ready to brake if something gets in your way. But in the USA, the thinking seems to be "I'm not driving too fast, I have the freedom of way here"?
As for the article, all points are entirely valid! Systems need to either be fully autonomous or require some level of constant engagement. This is kind of analogous to the previous stuff written about the deskilling of labor and automation. It's an impossible position to put someone in: "this works in every easy case, almost never fails, and when it fails, it might get arbitrarily complex and require the exact skills that the operator very rarely has any opportunity to practice".
The way this comes about has to do with the suburban model. When you're in an environment where traffic is commonplace, you can drive defensively or aggressively. During traffic crunches, defensive drivers are completely walked all over by aggressive drivers, and will easily double their time sitting in traffic. Leaving a safe distance means leaving room for people to dangerously merge.
This leads to a Nash equilibrium of aggressive driving, and now you literally have Americans criticizing the idea of "brake-checking" a tailgater as a way of signaling their displeasure at being tailgated (literally just tapping the brakes to slow down).
It's a completely messed up system, and it terrifies me that it seems to be spreading to other countries.
A six year old behind the wheel sitting in your lap.
That's also known plainly as "illegal".
I have no fucking idea why this is being tolerated on our public roads, nor why SAE has even classified these child-equivalent levels of autonomy as if they're acceptable.
Well, you could have extremely visible/audible signals for the driver to take over when the driving system is failing (internal lighting becoming red and blaring alarms), but that wouldn't be very popular I guess, especially with half-assed self-driving systems.
Person: lol how are you gonna pay people enough to test something that might endanger them?
Tesla: no they will pay us for the privilege
> then I’ll see a crash like this, [...] and I realize that, no, people still don’t get it.
That's just an example. You can't draw conclusions from a single example; you need lots of data. Your feeling that this is less safe than regular driving is not enough to justify your conclusion that L2 is less safe than regular driving. I have the opposite feeling (I think it reduces the number of accidents), but I don't know whether this is true. Accidents that were avoided by L2 don't make it into the news; there's a huge selection bias here.
I think this kind of article might be very dangerous, because it makes people more afraid, and if L2 is actually safer on average, then reducing its usage will just increase the number of accidents.
So you can switch on your hazard lights, gradually slow, let cars past and then pull over.
No normal, competent driver would act the way the car did in this incident.
1. How many fatalities per mile does tesla FSD have, compared to its best alternative if we ban it?
2. Focusing on specific failure cases is not relevant since the political decision to allow it is all or nothing.
3. It'd be perfectly logical to accept road testing FSD, even if it's significantly worse in some areas, as long as this is not exploitable, AND the net gain from allowing it is still overall positive.
I'd like to hear reasonable disagreements like "Tesla isn't actually all-in net positive" or "Here's why we should judge policy by something more than just net lives lost/saved".
Edit: the discussion fragments into two versions depending on the state of facts, which isn't clear:
A) Tesla FSD is not actually safer per-mile. If this is the case, I and most people would probably agree not to allow FSD on public roads. That's not really what this is about.
B) Tesla FSD is actually, on net safer per mile, but we should not allow it anyway.
I'm open to hearing other options, too, but while replying I'd like to hear which one you think applies.
And that's daily. It's only every few months that we see a story like this, and this one had no fatalities.
It's totally fair to criticize Tesla, Autopilot, and/or FSD, but these discussions always implicitly accept that the status quo is acceptable, and whether or not this technology should be allowed is debatable. The status quo is an unimaginable amount of human death and injury that we accept only because it's been normalized.
Delaying self-driving cars by one year comes at a cost of up to 1.35 million human lives [1]. That doesn't excuse failures like these, but the context should be understood.
[1] https://www.cdc.gov/injury/features/global-road-safety/index...
I don’t agree with that assessment.
It needs to make fewer mistakes than humans at all times, not just most of the time. Being great with a disaster here and there is terrible.
This is a discussion that comes up a lot in medical technology, and if I had to guess why this form of rhetoric fails is not just because it's easier to empathize with a human, but also because when the failure case seems "simple", the issue seems a lot more like an oversight and systemic issue. That in turn makes the probability of failure probably look much higher than it really is, while also implying further undiscovered oversights.
That also kinda addresses point 2, in that, specific failure case or not, it implies a weak system with obvious oversights. That definitely doesn't help the political case for complete FSD approval.
I'm generally skeptical of this utilitarian, rationalist form of rhetoric, if only because it's overly optimistic about the ability to overcome issues in some amorphous future. Sure, a future with full FSD is probably a net good, and we can even say it's probably in within our lifetimes, but the claim that future mitigated harms outweighs all current harms of live-testing FSD won't win enough people over, and drowns out other possible policies like advocating for adapting our infrastructure to support FSD rather than having cars attempt to read signs and signals designed for humans.
Most people, especially non technical people, aren’t going to care about numbers that say it’s safer or better on net when the car still occasionally does things that humans view as crazy or stupid.
So are all 20 manufacturers going to get the right to endanger my life just so they can train their self-driving? Or is this specific to Tesla?
Do I and my family get some compensation for this danger, or in case we are killed? Do we get a share of the profit this self-driving car will generate for Tesla?
What if Tesla never produces a good self-driving car and goes bankrupt? Then I died for nothing.
Defensive driving was one of the first things I learned, before even getting my driver's license. How come others seem to flat out ignore things like that, even for their own safety?