The problem is that Tesla owners repeatedly see "recall" used to mean "software update", which could lead to a lot of confusion if a physical recall is actually required in the future.
True, but by announcing things in this fashion it is making Tesla look bad. Regulations really need to be updated so that car makers can hide this type of problem from customers as easily as possible. Especially when it comes to Tesla, regulators really need to bend over backwards to prevent articles from being written that could be interpreted in a negative way.
Or are people concerned about the word "recall" for a different reason?
It should absolutely be tracked and publicized. But it's fundamentally different from "this car is broken and you have to take it back to the manufacturer".
Some things are absolutely safety-relevant. But no one cares.
>> "...The FSD Beta system may cause crashes by allowing the affected vehicles to: “Act unsafe around intersections, such as traveling straight through an intersection while in a turn-only lane, entering a stop sign-controlled intersection without coming to a complete stop, or proceeding into an intersection during a steady yellow traffic signal without due caution,” according to the notice on the website of the National Highway Traffic Safety Administration..."
No, that's not correct. Whether it can kill people or not is orthogonal to whether it's a true physical recall of the car or a software update.
...and that's one reason why I would never purchase a Tesla.
A recall means exactly what it means. The manufacturer is responsible for a fix. If for whatever reason they can't push an OTA update to your car, Tesla is still responsible for sending you a postcard in the mail and calling you every 6 months telling you to bring it in for service until they have reasonable evidence the car is no longer on the road.
The headline of that many cars being "recalled" is of course nice clickbait. Much better than "cars to receive minor software update that fixes an issue that isn't actually that much of an issue in the real world so far". One is outrage-ism fueled advertising and the other is a bit of a non event. It's like your laptop receiving some security related update. Happens a lot. Is that a recall of your laptop or just an annoying unscheduled coffee break?
"Voluntary recall" in this case means that Tesla did not choose to take the hard route where there's a court order for a mandatory recall. Few manufacturers fight that, because customers then get letters from the Government telling them their product is defective and that it should be returned for repair or replacement.
Somebody in the swallowable magnet toy business fought this all the way years ago.[1] They lost. It's still a problem.
[1] https://www.cpsc.gov/Safety-Education/Safety-Education-Cente...
A voluntary recall is easier and cheaper for all involved.
OTA updates are cool, but they add complexity to the car. I like my cars to be simple and reliable instead.
Customers aren't dealing with the inconvenience of having to bring their vehicle in / not having it for a while.
And Tesla is not incurring the cost of physically handling and fixing 360k cars.
So while it's technically a recall, the impact on the consumer and manufacturer is very different than what the word brings to mind.
https://www.nhtsa.gov/sites/nhtsa.gov/files/documents/14218-...
The official meaning of a recall is providing a record of a defect, informing the public that the product is defective, and making the manufacturer financially liable for either remediating the defect or providing a refund. However, the colloquial definition of a “recall” now means a product must be physically returned.
To better represent the nature of a “recall” they should instead call it something like “notice of defect”. In the case of safety critical problems like here they should use a term like “notice of life-endangering defect” to properly inform the consumers that the defect is actively harmful instead of merely being a failure to perform as advertised.
tl;dr They should change the terminology from “recall” to “Notice of Life-Endangering Defect”
Let's be real here: automated driving or not, having actually safe roads prevents death and harm in many cases.
The hyper-focus on high-tech software by all the agencies engaged in 'automotive security' is aimed at the wrong target. What they should actually do is point out how insanely unsafe and broken the infrastructure is, especially for people outside of cars.
See: "Confessions of a Recovering Engineer: Transportation for a Strong Town"
Even with the largest possible investment in rail, cars will exist in large numbers.
So yes, rail and cargo tramways in cities are great. But we can't just leave car infrastructure unchanged.
Especially because existing car infrastructure is already there and cheap to modify. Changing a 6-lane road into a 3-lane road with ample space for bikes and pedestrians is pretty easy.
Transportation safety is important but it shouldn’t be considered in isolation when there are potentially catastrophic consequences to prioritizing safety at the cost of everything else.
They are not serious. The rails are a great idea, along with drive-able 'rail' cars for individuals.
We have a significantly higher number of derailments; even the worst European rail is safer than US rail.
Road systems should always be worked on, but when a crash happens it's usually the driver's fault, except in a minority of cases where bad road engineering is to blame. This self-driving is fucking up enough that it cannot be blamed on the roads anymore, if it ever could.
Well yes, and that is literally exactly the problem. That is exactly why it's so unsafe in the US: instead of building a safe system, everything is blamed on people.
In countries that take road safety seriously, every crash is analyzed and the road system is often changed to make sure it does not happen again. That is why places like Finland, the Netherlands, and so on have been consistently improving in terms of death and harm caused by the road system.
Again, the book I linked goes into a lot of detail about road safety engineering.
> We suspend the drivers license until they can prove they are capable of being a safe driver.
An unsafely designed street often leads to situations where even good drivers intuitively do the wrong thing. Again, this is exactly the problem.
If you build a system where lots of average drivers cause accidents, then you have a lot of accidents.
> We should hold this software to the same standard. Until it can demonstrate safety at or above human level, it should be outlawed.
Yes, but it's a question of how much of our limited resources should be invested in analyzing and validating each piece of software from each manufacturer. In general, software like Tesla AP would likely pass this test.
I am not against such tests but the reality is that resources are limited.
> Road systems should always be worked on, but when a crash happens its usually the drivers fault, except in a minority of cases where bad road engineering is to blame.
I strongly disagree with this statement. It's a totally false analysis. If a system is designed in a way known to be non-intuitive and to lead to a very high rate of accidents, then it's a bad system. Just calling everybody who makes a mistake a bad driver is a terrible, terrible approach to safety.
Once you have a safe road system, if somebody is an extremely bad driver, then yes, taking that person off the road is good. However, in a country where so much of the population depends on a car, that punishment can literally destroy a whole family. So just applying it to anybody who makes a mistake isn't viable, especially in a system that makes it incredibly easy to make mistakes.
The numbers don't even show the full problem: the unsafe road system leads to fewer people walking in the US, and it still produces a high rate of deaths for people who walk.
https://nyc.streetsblog.org/2022/02/15/excerpt-there-are-no-...
If people really wanted to fix transportation, that's great; high-speed rail and public transportation reducing the number of cars on the road seem to be the best solution.
But hey, Elon’s hyper loop was a publicity stunt to discourage investment in that. So I say, whether you want to shit on Tesla or public roads, shit on Elon.
No, actually it's not. Less congestion in a system that depends on congestion for safety will lead to more accidents, not fewer.
That is what was shown during Covid: less driving, but accidents per mile went up.
So yes, of course public transport and bikes are great, but if you don't fix the underlying problems in the road system, you are going to have a whole lot of accidents.
> But hey, Elon’s hyper loop was a publicity stunt to discourage investment in that.
This is a claim some guy has made, not the truth. What is more likely is that Musk actually thinks Hyperloop is great (it's his idea, after all) and would have wanted investment in it.
> shit on Elon
I prefer not to shit on people most of the time.
Musk is the outcome of a South African/American way of thought that is more in line with the US average than most people who advocate for public transport are. That is the sad reality.
And the problems with the US road system and the US's bad public transport absolutely cannot be blamed on him. There are many people with far more responsibility who deserve to be shit on far more.
Would smart roads be expensive? RFID transponders seem super cheap compared to how much actual asphalt costs. Authorities are currently unable to remotely control flows, speeds, and safety, which is completely bonkers.
Yeah, it's not actually that easy. Go and look into train signaling. And cars are not even able to do coupling.
Making a road system operate like a railway for cars is crazy difficult and has never been done before.
Saying this is like saying we should ignore sex offenders in favor of reworking our society. Instead, we should solve both problems rather than bicker over priorities.
And I would say that I'm not proposing to rework society in sociological sense, but rather to throw out the standard engineering standards and replace them with better standards.
And actually it does matter, even in the case you suggest. If a car rolls into an intersection, the speed of that car matters. It matters at what point it becomes clear that the car is out of control. It matters if there are speed bumps or something along those lines that can send strong signals to a driver.
If all intersections are raised, then the top speed of cars will simply be lower, and if somebody, human or AI, makes a mistake that leads to a crash, that crash will happen at a far lower speed.
And proper road design also means less intelligence and fancy car design is required. I'd rather get hit by a badly designed, unsafe old car at 20 mph than by a fancy new car at 30 mph.
Does anyone have insights on what QA looks like at Tesla for FSD work? Because all of these seem like table stakes before even thinking about releasing the FSD beta.
Tesla is not exactly in love with QA. Especially for FSD.
FSD is mainly 2 things: 1. (by far the most important) a shareholder-value-creating promise that has been "solved" for 6 years, according to their CEO; 2. a software engineering research project.
What FSD is not is a safety-critical system (which it should be). They focus on cool ML stuff and shipping features, with complete disregard for how to design, build, and test safety-critical systems. Validation and QA are basically non-existent.
Based on their presentations, they certainly have a whole load of tests, many built directly from real-world situations that the car has to handle. They simulate sensor input and check that the car does the right thing in simulation.
They very likely have some internal test drivers, and before the software goes public it goes out to the engineers' own cars.
Those are just some of the things we know about.
I have no source on their approach to testing safety-critical systems, but we do know that they have a lot of software that has passed all the tests run by the major governments. They are one of the few (or the only) car makers fully compliant with a number of standards on automated braking in the US. We have many real-world videos where another car would have killed somebody and the Tesla stopped based on image recognition.
So they do clearly have some idea of how to do this stuff.
So when making these claims, I would like to know what they are based on. It might very well be true that their processes are insufficient, but I would want to see some real data. Part of what a government could do is force car makers to open up their QA processes.
Or the government could (should) have its own open test suite that a car needs to be able to handle, but clearly we are not there yet.
They have a set of regression tests they run on new code updates either by feeding in real world data and ensuring the code outputs the expected result, or running the code in simulation.
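Something like the toy sketch below, presumably. To be clear, this is my guess at the general shape of a replay-style regression test, not Tesla's actual harness, and every name in it is made up.

    # Hypothetical replay-style regression test: feed recorded sensor
    # frames to the planner and compare its outputs against previously
    # validated decisions. All names here are invented for illustration.
    import json

    def load_recorded_drive(path):
        # Each line: {"frame": {...sensor snapshot...},
        #             "expected": {"steer": 0.02, "brake": 0.0}}
        with open(path) as f:
            return [json.loads(line) for line in f]

    def run_regression(planner, records, tolerance=0.05):
        failures = []
        for i, rec in enumerate(records):
            decision = planner(rec["frame"])
            for key, expected in rec["expected"].items():
                if abs(decision[key] - expected) > tolerance:
                    failures.append((i, key, expected, decision[key]))
        return failures

    # failures = run_regression(my_planner, load_recorded_drive("drive.jsonl"))
    # assert not failures, f"{len(failures)} outputs diverged from recorded runs"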
It does seem worrying that they would miss things like this.
Here’s a talk from Karpathy explaining the system in 2021:
Though I don’t recall if he explains the regression testing in this talk, there’s a few good ones on YouTube.
I used to think that fact was going to delay self-driving cars by a decade or more, because of the potential bad press involved in AI-caused accidents, but then along comes Tesla and enables the damn thing as a beta. I mean...good for them, but I've always wondered if it was going to last.
I've been using it pretty consistently for a few months now (albeit with my foot near the brake at all times). I haven't experienced any of the above. The worst thing I've seen is the car slamming on the brakes on the freeway for... some reason? There was a pile-up in a tunnel caused by exactly that a month or so ago, so I've been careful not to use FSD when I'm being tailgated, or in dense traffic.
Yes. An army of Tesla owners perform the QA, in production.
But in all seriousness, they do have a small team that validates builds before they go to employees.
Whether it's a neural network inside or not is completely irrelevant. That's why it's called "black box".
Time to stop driving. That is not normal
As for their QA process, in 2018 they had a braking distance problem on the Model 3. They learned of it, implemented a change that alters the safety critical operation of the brakes, then pushed it to production to all Model 3s without doing any rollout testing in less than a week [1]. So, their QA process is probably: compiles, run a few times on the nearby streets (I am pretty sure they do not own a test track as I have never seen a picture of tricked out Teslas doing testing runs at any of their facilities), ship it.
[1] https://www.consumerreports.org/car-safety/tesla-model-3-get...
1. https://finance.yahoo.com/news/upcoming-tesla-software-2020-...
There is even a former Tesla AI engineer who throws objects in front of the car on YouTube, as a demonstration.
The results are not glorious at all :| (trying to find the channel again; let me know if someone recognizes it).
And random public tests too: https://www.youtube.com/watch?v=3mnG_Gbxf_w
This is basic safety auto-braking. It just feels very wrong to even accept that it goes into a release.
The guy behind this is known to be untrustworthy, and many of the videos don't actually do what he claims. Notably he refused to release the videos that would prove his claims right.
The reality is that Tesla scores high on all the automated braking tests done by governments. The driver, however, can override this, and that is exactly what is being done in this video.
That's what makes it unfinished...
It's never passed the 'drive from New York to LA with nobody touching the controls' test...
The remedy OTA software update will improve how FSD Beta negotiates certain driving maneuvers during the conditions described above, whereas a software release without the remedy does not contain the improvements.
Given how much time and data they had so far, and the state it's in, it really makes news like Zoox getting a testing permit for public roads, without any manual controls in the vehicle, seem crazy irresponsible and dangerous. Is it possible that they are just that much better at cracking autonomous driving?
There are hundreds of videos that directly contradict your comment. The funny thing is that on Reddit and on Hacker News, as well as in all mainstream news outlets, I have never, not even once, seen one of these videos posted or even linked to or even referred to. It's like they don't exist, despite the fact that there are hundreds of them just a click away.
You can say that we shouldn't be experimenting on the roads. That's a matter of opinion. But to say that FSD isn't the most advanced and capable system available, to say that it's a fraud, to say that it has failed, is all objectively false. Just look at where they started and look at one of the latest videos.
And in what may be a world first, I will link to a video here on Hacker News.
https://www.youtube.com/watch?v=mHadhx3c840&ab_channel=AIDRI...
The same guy, who drives FSD every day, is deeply involved in reporting bugs and publishes videos about FSD. By any metric that matters, his opinion is worth more than yours or anyone else's on Hacker News or Reddit.
https://www.youtube.com/watch?v=Nvvhmc837Tw&ab_channel=AIDRI...
If Tesla wants to call it Full Self-Driving, they should take legal responsibility for the car while FSD is engaged. Until then, it's a dangerous beta test that pedestrians and other road users didn't sign up for.
I like my tesla, but autopilot drives like a poorly trained teenager and I don't ever use it. FSD isn't much better and needs to be tested with proper rigour by trained employees of Tesla, not random owners.
> Remedy: Tesla will release an over-the-air (OTA) software update, free of charge. Owner notification letters are expected to be mailed by April 15, 2023. Owners may contact Tesla customer service at 1-877-798-3752. Tesla's number for this recall is SB-23-00-001.
[1] https://www.reddit.com/r/teslamotors/comments/113wltl/commen...
Surely the letter could be displayed on screen, the way most other tech displays some text of sorts before an update occurs.
Are these letters designed to satisfy the legalese types, or is paper still required to make sure tech companies don't make post-update changes to the letter contents?
Statistically these are not likely to be Tesla owners, but it's about making sure people know about the issue and how to fix it.
I’ll completely agree that “Full self driving” is a misleading name, and they should be forced to change it, full stop.
That being said, it’s exceptionally clear that all the responsibility is on you, the driver, while you are using it. The messaging in the vehicle is concise and constant (not just the wall of text you read when opting in.) Eye tracking makes sure you are actually engaged and looking at the road, otherwise the car will disengage and you’ll eventually lose the ability to use FSD. Why is there never a mention of this in any coverage? Because it’s more salacious to believe people are asleep at the wheel.
Is it perfect? No, though it’s a lot better than many people seem to want the world to believe. It’s so easy to overlook the fact that human drivers are also very, very far from perfect.
Virtually every study ever done on human-machine interaction shows that users will inevitably lose reaction time and attention when they are engaged with half automated systems given that constant context switching creates extreme issues.
Waymo did studies on this in the earlier days and very quickly came to the conclusion that it's full autonomy or nothing. Comparisons to human performance are nonsensical because machines don't operate like human beings. If factory floor robots had the error rate of a human being you'd find a lot of limbs on the floor. When we interact with autonomous systems that a human can never predict precision needs to far exceed that of a human being for the cooperation to work. A two ton blackbox moving at speeds that kill people is not something any user can responsibly engage with at all.
Problem and solution.
Nothing more need be said.
But the marketing worked. They sold the dream to people and only got sued a couple times and still got the most subsidization of any automaker
A bit of a sensational title compared to what this really is.
The people who keep making this "recall means go back to the dealer" claim have simply never paid attention to all the recalls in the world that don't make it to their mailbox the way car recalls do.
https://www.reuters.com/business/autos-transportation/tesla-...
Sounds like it's just a patch release with regulators involved.
But the "[traveling] through intersections in an unlawful or unpredictable manner" is inherent to the FSD beta. Most of the time it does "fine", but there's some intersections where it will inexplicably swerve into different lanes and then correct itself. And this can change beta-to-beta (one in particular used to be bad, it got fixed at some point, then went back to the old swerving behavior).
I can understand why people might think that all these recalls require going back to the shop, that's how most legacy makers work to this day.
Non-Tesla automakers are not "legacy".
We have to remember that Tesla was the first company to really do this, and even today almost no other company does it. Most can update some parts of their system, but almost none have anywhere close to the integration Tesla has.
So for 99% of recalls it is a physical recall; it's just Tesla where most of the time it isn't.
On Page 4 it mentions the "Description of Remedy" as an OTA update. Gives a new meaning to a recall!
https://howtune.com/recalls/ford/ford/1980/
Software > Stickers
Not nothing, but not as big as CNBC is making it out to be.
I worked at an Oldsmobile dealer in the 80s and fixed all kinds of issues on cars that were "recalled", and that is what we called it way back then and long before. Some were trivial and others were serious safety issues.
https://www.kbb.com/car-advice/what-do-i-need-to-know-about-...
I will not stay behind or next to a Tesla if I can avoid it. I'll avoid being in front of one if the distance is such that I cannot react if the thing decides to suddenly accelerate or, while stopping, not brake enough or at all.
In other words, I have no interest in risking my life and that of my family based on decisions made by both Tesla drivers (engaging drive-assist while not paying attention, sleeping, etc.) or Tesla engineering.
Will this sentiment change? Over time, sure, if we do the right things. My gut feeling is that a program similar to crash safety testing will need to be instituted at some point.
A qualified government agency needs to come up with a serious "torture" test for self-driving cars. Cars must pass a range of required scenario responses and will be graded on the results of running the test suite. And, of course, the test suite needs to include an evaluation of scenario response under various failure modes (sensor damage, impairment, disablement, and computing system issues).
I am not for greatly expanded government regulation over everything in our lives. However, something like this would, in my opinion, more than justify it. This isn't much different from aircraft and aircraft system certification or medical device testing and licensing.
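To make the "graded" part concrete, the scoring could be as simple as a weighted scenario table. Here's a toy sketch; the scenario names, weights, and pass threshold are all invented, since no agency defines such a suite today:

    # Toy sketch of a graded scenario suite for driver-assist systems.
    # Scenarios and weights are invented; the last one is a failure-mode case.
    SCENARIOS = [
        ("stationary_obstacle_on_highway", 3.0),
        ("pedestrian_crossing_at_night", 3.0),
        ("turn_only_lane_go_straight", 2.0),
        ("steady_yellow_at_intersection", 2.0),
        ("camera_partially_occluded", 1.5),
    ]

    def grade(results):
        # results: dict mapping scenario name -> True (passed) / False (failed)
        total = sum(w for _, w in SCENARIOS)
        earned = sum(w for name, w in SCENARIOS if results.get(name))
        score = earned / total
        return "PASS" if score >= 0.95 else f"FAIL ({score:.0%})"

    # grade({"stationary_obstacle_on_highway": True}) -> "FAIL (26%)"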
...the feature could potentially infringe upon local traffic *laws or customs* while executing certain driving maneuvers...
Do they want Tesla to create a DB of 'allowed' and 'locally faux pas' driving maneuvers? It sure reads like they do. If so, that's pretty crazy, and people will die because of this decision.
The usual comparison is vs. all cars on the road, but Teslas are comparatively new cars with a lot of recent and expensive safety features not available in older cars. They're also likely to be maintained better. On top of that, they're driven by a different demographic, which skews accident statistics.
Tesla's Autopilot can only be enabled in comparatively safe and simple circumstances, yet the comparison is made against all drivers in all situations. When Autopilot detects a situation it can't handle, it turns off and hands over to the human, who gets a few seconds' warning. Human drivers can't just punt the issue and then crash.
Tesla FSD may be safer than Human drivers for the limited set of environments where you can use it, but last time I checked, the numbers that Tesla published are useless to demonstrate that.
This is incorrect. FSD can be enabled everywhere from dirt road to parking lots, even highways and dense urban environments like NYC. There is no geofence on FSD, you can turn it on anywhere the car can see drivable space.
The primary one is:
1) FSD turns itself off whenever things get too difficult or complex for it. Human drivers don't get to do this. Recent crowdsourced data suggests a disengagement every 6 miles of driving: https://twitter.com/TaylorOgan/status/1602774341602639872
If you eliminate all the difficult bits of driving I bet you could eliminate a lot of human driver crashes from your statistics too!
A secondary issue, but relevant if you care about honesty in statistical comparisons, is:
2) The average vehicle & driver in the non-Tesla population is completely different. Unless you correct for these differences somehow, any comparison you might make is suspect.
(maybe people just don't think through ramifications)
I realize that not everybody is in agreement, but I personally use the FSD beta while remaining fully in control of the vehicle. I steer with it, I watch the road conditions, I check my blind spots when it is changing lanes, I hit the accelerator if it is slowing unexpectedly, I hit the brakes if it is not...
You know, basically behaving exactly as the terms you have to agree to in order to use the FSD beta say you are going to behave.
When I look at the wreck in the tunnel (in San Francisco?) a few months ago, my first thought is: how did the driver allow that car to come to a full stop in that situation? Seriously, you are on a highway and your car pulls over half way out of the lane and gradually slows to a complete stop. Even if you were on your phone, you'd feel the unexpected deceleration, a quick glance would show that there was no obstruction, and the car further slowed to a complete stop.
FSD is terrible in many situations, that is absolutely true. But, knowing the limitations, it can also provide some great advantages. In heavy interstate traffic, for example, I'll usually enable it and then tap the signal to have it do a lane change: I'll check my blind spots and mirrors, look for closing traffic behind me, but it's very nice to have the car double checking that nobody is in my blind spot. There are many situations where, knowing the limitations, the Tesla system can help.
Good for you that you apparently know how to "use it correctly" or whatever but that's not exactly the point here.
In all honesty, I'd probably spend a few seconds trying to figure out "what does the car see that I don't?" and let it come to a stop. Maybe it's a catastrophic system failure that I can't see. Maybe it's an obstacle headed into my path that hasn't gotten my attention. If my reflexive reaction is supposed to be to distrust the car's own assessment of the situation, then the system isn't good enough yet.
By crashing in a way that is fatal to your life
"could kill you if gone wrong" is nearly every choice in life. The relevant metric is risk of death or other bad outcomes.
Flying in a commercial airliner could kill you if things go wrong. Turns out it's safer than driving the same distance.
Eating food could kill you if things go wrong (severe allergy, choking, poisoning) yet it's far preferable to not eating food.
Similarly, Tesla's ADAS could kill you if things go wrong but the accident rate (as of 2022) is 8x lower than the average car on the road, and 2.5x lower than a Tesla without ADAS engaged.
Don't let the perfect be the enemy of the good.
Would you like to participate in the beta program to test stopping bullets being shot at you?
― George Carlin
To be fair, though, Tesla has no sensors other than cameras, and I believe the Mercedes has a half dozen or so, several radars and even a lidar.
This doesn't mitigate Tesla's gross ethical violations in letting this thing loose, it makes them worse. Tesla knew that cameras alone would be harder to make work than cameras plus radar plus lidar, and they shouldn't be lauded for attempting FSD with just cameras. It's an arbitrary constraint that they imposed on themselves that is putting people's lives in danger.
Edit (because HN throttling replies): > Mercedes is being responsible about their rollout
You can always succeed if you get to define success. If rolling out something that doesn't do much of anything counts as success, then fine, it's a success. Fewer than 20 people have died in Autopilot-related incidents over ~5 billion miles of travel. Zero deaths is unattainable and unrealistic, so "responsible" and "safe" end up meaning whatever narrative is being pushed. 43k people died in US car accidents last year, roughly half a million deaths over the entire time Tesla has offered some sort of driver-assistance system called Autopilot.
How many deaths would we be okay with from Mercedes' system for it to still be "responsible"? Because it won't be zero. Even assuming 100% penetration of Automatic Emergency Braking systems, fatalities are only predicted to fall by 13.2% (a China-specific study), and injuries by a similar amount.
TLDR "Safe" is not zero deaths. It is a level of death we are comfortable with to continue to use vehicles to travel at scale.
I would not be surprised to see Mercedes beat Tesla to safe fully automated driving because they took it slow and steady instead of trying to rush it out to legacy vehicles that don't have the right sensor arrays.
Edit: It's weird to reply to a comment by editing your own. It feels like you're trying to preempt what your interlocutor is saying, rather than reply.
Do you have any information that it'll continue to be 40mph even where not required to be?
I wouldn't be surprised if the answer is yes, but I've been paying attention and haven't come across any yet.
If Tesla believes that their feature is safe, then let them take legal liability for it.
Otherwise, not really.
Neither is true, both are marketing.
> but I don't think there's any way to say that for sure
I mean, yes there is. If you want an objective test where you drop a car anywhere in the world and see if it can get somewhere else, then clearly one is more useful than the other.
In basketball they say 'the best ability is availability'. In those terms, AP is in a totally different dimension. AP has been driven for hundreds of millions of miles by now; it must be a crazy high number at this point. Mercedes' L3 system has barely been driven at all; it's available in very few cars.
The only way you can reasonably compare the Mercedes L3 system is if you limit the comparison to the extremely limited cases where the Mercedes L3 system is available. If you compare them there, I would think they aren't that different.
> otherwise Tesla would release an L3 capable car too
No, because making something L3 in an extremely limited selection of places is simply not something Tesla is interested in doing. Doing so would be a lot of work that they simply don't consider worth it while they are trying to solve the larger problem.
I am unsure what "allowed" means in that context. They just did it. The lawsuits are coming, surely?
Sometimes it is better to ask for forgiveness than for permission. This was an audacious example. Time will tell if there is anything that will stop them.
Then there's the issue of sensor fusion. Lidar requires sensor fusion because it can't see road lines, traffic signals, or signs. It also can't see lights on vehicles or pedestrians. So you still have to solve most of the computer vision issues and you have to build software that can reliably merge the data from both sensors. What if the sensors disagree? If you err on the side of caution and brake if either sensor detects an object, you get lots of false positives and phantom braking (increasing the chance of rear-end collisions). If you YOLO it, you might hit something. If you improve the software to the point that lidar and cameras never disagree, well then what do you need lidar for?
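That trade-off is essentially a one-line policy choice. A minimal sketch (the thresholds and the OR/AND framing are mine, not any vendor's actual design):

    # The fusion dilemma from the paragraph above, reduced to a policy flag.
    # Thresholds are invented; real systems weigh far more signals than this.
    def should_brake(camera_conf, lidar_conf, policy="either", threshold=0.5):
        cam = camera_conf > threshold
        lid = lidar_conf > threshold
        if policy == "either":  # cautious: more phantom braking
            return cam or lid
        if policy == "both":    # permissive: fewer false positives, may hit things
            return cam and lid
        raise ValueError(policy)

    # should_brake(0.9, 0.1) -> True; should_brake(0.9, 0.1, "both") -> False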
I think lidar will become more prevalent, and I wouldn't be surprised if Tesla added it to their vehicles in the future. But the primary sensor will always be cameras.
Camera modules are cheap and available in huge quantities.
Pf. Tensors beat sensors.
Just you wait for another garbanzillion miles driven. Then you'll see.
/s
Tesla's software is used far, far, far more, in far, far more situations. Even comparing those two things is kind of silly.
It's like comparing a system designed only for race tracks with a Honda Civic. They are simply not designed for the same thing.
If Mercedes achieves L3 in all the places Tesla now allows AP (or FSD Beta) then that would prove the 'bet' on vision wrong.
Until then, nobody has proven anything.
Even a fruit-fly class AGI would do it.
Edit: Dang, you're all right, they could eat this and still be alive.
My whole sentiment comes from FSD being their big shot (and everything that comes with that, like the robotaxis and whatnot). Without FSD, they're "just another car company" and the market is already thriving with good alternatives (Audi's EVs are jaw dropping, at least for me). Excited to see what they announce on March 1st, though.
https://electrek.co/2023/02/15/tesla-self-driving-hw4-comput...
So much for the "full self-driving" fantasy all those people paid for but will not get.
It's technically a recall, but it's fixed with an OTA update. But the fact that any "defect" that can be fixed with an OTA update is called a "recall" is confusing to consumers and contributes to media sensationalism.
There absolutely needs to be a process overseen by regulators for car manufacturers to address software-based defects, but the name would benefit from being changed to reflect that it can be done without physically recalling the vehicle to the dealer.
Would it be sensationalism if the same recall happened to cars not capable of the OTA update?
Because I think there should be media "sensationalism" about these types of issues, regardless of whether they can be fixed with physical or OTA repairs.
> But the fact that any "defect" that can be fixed with an OTA update is called a "recall" is confusing to consumers
What is confusing about this to you?
> contributes to media sensationalism
Why do you think this recall is more sensationalistic than many of the other recalls issued recently? Automotive recalls often address serious issues with cars. "Fix this or you could die" is a common enough theme when, you know, if you don't fix it you could die.
> but the name would benefit from being changed to reflect that it can be done without physically recalling the vehicle to the dealer.
Why? Information related to how, when, and where the recall can be addressed is contained in the text of the recall notice, same as it ever was.
It's crazy that Tesla has been doing OTA for 10+ years while many cars released today are still not capable of being upgraded.
And even those few cars that do support OTA only support it for a very limited set of systems. Often they still need to go to the shop because lots of software lives on chips and sub-components that can't be upgraded.
Their CEO has claimed "it will be ready next year" for literally 8 years now. How much more bullshit is he going to sell?
V11 supposedly uses neural nets for deciding the driving path and speed (rather than hand coded C++ rules).
Jokes aside, it's gotta be damn tough to QA a system like this in any sort of way that resembles full coverage. Can't even really define what full test coverage means.
An autonomous Tesla driving into a group of people or swerving into oncoming traffic potentially kills other people.
edit: looks like no.. the largest was 578607 cars... also by Tesla lol
Toyota, 2.9M: https://www.consumerreports.org/car-recalls-defects/toyota-r...
Ford, 21M: https://247wallst.com/autos/2021/07/24/this-is-the-largest-c...
Takata airbags, possibly 100M+: https://www.carvertical.com/blog/top-10-worst-car-recalls-in...
> Yet even after the company declared bankruptcy in 2017, the Takata recall kept on giving. 65-70 million vehicles with faulty Takata airbags were recalled by the end of 2019, with approx. 42 million still to be recalled.
Just to reiterate: it's a software update. You can FUD as much as you want.
This morning I tried to turn it on, and the car immediately veered left into the oncoming lane on a straight, 2 lane road. Fortunately, there were no other vehicles nearby.
I immediately turned it off in the settings, and have no intention of re-enabling.
I understand the simpler lane keeping system is okay, but I don’t want to trust any system like this from Tesla given their track record with FSD.
This is the kind of feature I will use when the car and software in question has been battle tested for *years* with objectively excellent results.
FSD doesn't need to be perfect to avoid accidents. In fact, if it was too perfect I'm afraid I could become very inattentive at the wheel.
The regular, included autopilot works pretty well as a smarter cruise control; as long as you use it on a major highway in the daytime in good conditions it does a good job. And it's very useful in stop-and-go traffic on the highway.
But the FSD is crap.
Faulty, distracted, intoxicated human beings are allowed on public roads. The "legal limits" for blood alcohol levels aren't 0.00%.
That’s the only way it could be released in its current state.
Looking at it as anything other than a beta where you have to maintain control is misunderstanding what it is. Which you are clearly doing. It is absolutely expected to be worse than a drunk teenager in this stage.
At this point, it's looking pretty far off, especially for Tesla. Even Cruise and Waymo are having issues and they have far better sensors than Teslas. It seems silly to be paying $15,000 for something that realistically might not happen during my ownership of the vehicle.
Even if it does happen, I can purchase it later. Sure, it's cheaper to purchase with the car because Tesla wants the money now. However, I'd rather hedge my bets and with investment gains on that $15,000 it might not really cost more if it actually works in the future. 5 years at 9% becomes $23,000 and I don't think Tesla will be charging more than $25,000 for the feature as an upgrade (though I could be wrong). If we're talking 10 years, that $15,000 grows to $35,500 and I can potentially buy a new car with that money.
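For what it's worth, those figures check out under straight annual compounding at 9% (ignoring taxes and fees):

    # Quick check of the compounding math above: $15,000 at 9%/year.
    principal = 15_000
    for years in (5, 10):
        print(years, round(principal * 1.09 ** years))
    # -> 5 23079   (the ~$23,000 figure)
    # -> 10 35510  (the ~$35,500 figure)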
Plus, there's the genuine possibility that the hardware simply won't support full self driving ever. Cruise and Waymo are both betting that better sensors will be needed than what Tesla has. Never mind the sensors, increased processing power in the future might be important. If current Teslas don't have the sensors or hardware ultimately needed, one has paid $15,000 for something that won't happen. Maybe we will have self driving vehicles and the current Teslas simply won't be able to do that even if you paid the $15,000.
It just seems like a purchase that isn't prudent at this point in time. The excitement has worn off and full self driving doesn't seem imminent. Maybe it will be here in a decade, but it seems to make a lot more sense to save that $15,000 (and let it grow) waiting for it to be a reality than paying for it today.
It's just too damn glitchy to be worth thousands of dollars extra.
It's worth pointing this out.
I have FSD, I use it every single day. I love it. If every car on the road had this the road would be a substantially safer place.
I'm actually afraid to drive behind a Tesla and either keep extra distance or change lanes if possible. I still have more faith in humans not to randomly brake than in beta FSD.
It’s one thing to put your own life in the hands of this beta model, and it’s another to endanger the life and property of others.
It's nice on the freeway though.
The problem with systems where 99% is a failing grade is that 99% of the time they work great. The other 1% of the time, you die. No one is against FSD or safer tech. They are against Tesla's haphazard 2D vision-first FSD.
Wanna hear about this newfangled tech that avoids 100% of accidents, has fully coordinated swarm robotics between all the cars, never swerves, and can operate in all weather and lighting conditions? It's called a streetcar.
The words 'road' and 'safety' should never be uttered in the same sentence.
But the company is run by an asshole who people love to hate, so... everything it does is Maximally Wrong. There's no space for reasoned discourse like "FSD isn't finished but it's very safe as deployed and amazingly fun". It's all about murder and death and moral absolutism. The internet is a terrible place, but I guess we knew that already. At least in the meantime we have a fun car to play with.
I think a lot of "people" online are paid shills.
HN should severely downrank any "opinion" posts from anon accounts.
I have a Tesla with FSD and it's incredible. Though it drives way worse than me, it drives pretty darn good, and will save a ton of lives and change the world by freeing up a lot of time for people to do more important things.
Even if Tesla’s current implementation were objectively safer than the average human driver, it would still represent a net negative on safety because of the negative impact the glaring flaws will have on peoples’ confidence in self driving tech.
Obviously attention should be drawn to the fact that there is a critical safety update being pushed OTA, but "recall" is too overloaded a term if it means both "we're taking this back and destroying it because it's fundamentally flawed" vs. "a software update is being pushed to your vehicle (which may or may not be fundamentally flawed...)"
I do think something beyond "software update" is necessary, though - these aren't your typical "bug fixes and improvements" type release notes that accompany many app software releases these days. I don't think it would be too difficult to come up with appropriate language. "Critical Safety Update"?
How many times in history has a vehicle recall meant the cars were returned and destroyed?
What makes this situation any more confusing than all the previous times vehicles were recalled?
In the current world of forced updates (looking at you, Android), the word "update" itself is kind of toxic, and doesn't (and I would argue can't) represent what has happened here, even if it's technically more correct.
What was that called again?
/puts on sunglasses preemptively/
https://idlewords.com/2023/1/why_not_mars.htm
I would love to go there, even if it meant I died the second I stepped foot on it. Getting back would be the last thing on my mind.
https://www.mcsweeneys.net/articles/my-name-is-elon-musk-and...
It is not so much a software problem as an <other people on the road> problem.