Based on their presentation, they certainly have a large suite of tests, many built directly from real-world situations the car has to handle. They simulate sensor input and check that the car does the right thing.
They very likely have internal test drivers, and before the software goes public it is deployed to the engineers' own cars.
Those are just some of the things we know about.
I have no source on their approach to testing safety-critical systems, but we do know they have a lot of software that has passed every test by all the major governments. They are one of the few (or only) carmakers fully compliant with a number of standards on automated braking in the US. We have many real-world video examples where other cars would have killed somebody and the Tesla stopped based on image recognition.
So they do clearly have some idea of how to do this stuff.
So when making these claims I would like to know what they are based on. It might very well be true that their processes are insufficient, but I would like to actually see some real data. Part of what a government could do is force carmakers to open up their QA processes.
Or the government could (should) have its own open test suite that a car needs to be able to handle, but clearly we are not there yet.
1. I know people working at Tesla.
2. The much more important one - Elon's Twitter feed. They're doing last-minute changes, and once it compiles and passes some automated tests, it's tested internally for only a few days before it's released to customers. Even if they had world-class internal testing (they don't), for something that has to work in an environment as diverse as a self-driving system without any geo-fencing, those timelines are all you need to know.
That's why I bought/will keep buying Toyota/Lexus.
https://www.euroncap.com/en/results/tesla/model+y/46618
Same for NHTSA:
https://www.nhtsa.gov/vehicle/2022/TESLA/MODEL%2525203/4%252...
For example, did you predict, based on the speculation of Tesla being incompetent with regard to safety, that they have the lowest probability of injury scores of any car manufacturer? Because they do.
Did you predict, based on speculation about Elon Musk's incompetence in predicting that self-driving would happen, that there are millions of self-driving miles each quarter? Because there are.
Did you predict, based on speculation about Tesla incompetence in full self-driving, that the probability of accident per mile is lower rather than higher in cars that have self-driving capabilities? Because they do.
I know this sort of view is very controversial on Hacker News, but I still think it is worth stating, because I think people are actually advocating for policies which kill people without realizing that the data disagrees with their assumptions.
Also, none of that is self driving. This data talks about AP, not FSD. FSD is also not self driving by any means (it's level 2 driver assist), but that's a detail at this point.
For example, elsewhere in this comment thread, someone threw out a random statistic of 400:1 as part of their argument, but this seems to me to be something like six orders of magnitude away from a data-informed estimate.
To try and contextualize how big an error that is - it is like thinking that a house in the Bay Area has the same cost as a soft drink.
I think if we have to cite our data we are less likely to make that sort of error and more likely to catch it when it happens.
I definitely don't think FSD is magically safe. So if you think that is what I'm trying to say, please update your beliefs according to my correction that I do not believe this. I think anyone driving in FSD should remain vigilant, because it can make worse decisions than a human would.
The probability of an accident for any driver-assistance system will ALWAYS be lower than for a human driver alone - but that doesn't mean the system is safe for use with the general public!
People like me are not advocating for "killing people" because we aren't looking at data - it's that no company has the right to make these tradeoffs without the permission and consent of the public.
Also if this was about safety and not just a bunch of dudes who think they are cool because their Tesla can kinda drive itself, why does "FSD" cost $16,000?
If you are advocating against a system that protects 400 people and kills one, you are advocating for killing people.
Totally we should be wary of a system that protects 400 and kills 1. Thank you for providing the numbers. It helps me show my point more clearly.
If you are driving on a road you encounter cars. Each car is a potential accident risk. You probably encounter a few hundred cars over ten or so miles. Not every car crash kills, but let's just assume they all do to keep this simple. For the stat you propose, you are talking about being uncomfortable with roughly one accident per ten miles.
Now let's look at the data. The data suggests the actual figure is closer to 6,000,000 miles per accident. That is about six orders of magnitude away from the miles-per-accident figure you imply would make you uncomfortable.
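Just to sanity-check the gap (a quick back-of-the-envelope sketch; the ~10 miles and ~6,000,000 miles figures are the ballpark numbers from this thread, not official statistics):

```python
import math

# Ballpark figures from the discussion above (assumptions, not measured data):
implied_miles_per_accident = 10          # roughly what the 400:1 framing implies
observed_miles_per_accident = 6_000_000  # roughly what the crash data suggests

ratio = observed_miles_per_accident / implied_miles_per_accident
orders_of_magnitude = math.log10(ratio)

print(f"ratio: {ratio:,.0f}x")                           # ratio: 600,000x
print(f"orders of magnitude: {orders_of_magnitude:.1f}")  # orders of magnitude: 5.8
```

So "six orders of magnitude" is a fair rounding of a ~600,000x gap.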
Let's try shifting that into a context people are more familiar with: a one-dollar purchase would be a soft drink, and a six-million-dollar purchase would be something like buying a house in the Bay Area. That is a pretty big difference. I feel very differently about buying a soft drink versus buying a house in the Bay Area. If someone told me they felt buying a house was cheap, then proposed a price more comparable to the cost of a soft drink, I might suggest they check the dataset to get a better estimate of housing prices.
So I very strongly feel we should cite the numbers we use. For example, I feel you should really try to back up the 400-to-1 number so I can understand why you consider it reasonable, because I do not.
> Also if this was about safety and not just a bunch of dudes who think they are cool because their Tesla can kinda drive itself, why does "FSD" cost $16,000?
Uh, we are on a venture-capital-adjacent forum. You obviously know. But... well, the price of FSD is tuned to ensure the company is profitable despite the expense of creating it, as is common in capitalist economies with healthy companies seeking to make a profit in exchange for providing value. Prices are commonly higher for high-effort value creation, like building a self-driving car or performing surgery.
1) Those are statistics for the old version; the new version might be completely different. I've had enough one-line fixes break entire features I was not aware of that my view is that any change invalidates all the tests (including the tests that Tesla should have but doesn't). A given update probably doesn't cause changes outside its local area, but I can't rely on that until it's been tested.
2) Self-driving is presumably preferentially enabled for highway driving, which I assume has fewer accidents per mile than city driving, so comparing FSD miles to all miles is probably not statistically valid.
Just for context - I've been in a self-driving vehicle. Anecdotally, someone ahead slammed on the brakes. The car stopped for me, but I was shocked: the traffic hadn't changed for hours before this on a cross-country trip, and I think I would probably have gotten into an accident there. Also anecdotally, there were times where I felt the car was not driving properly, so I took over; I think it could have gotten into an accident. Basically, the best explanation I have for the data I've seen right now is that human + self-driving is currently better than human alone and better than self-driving alone. The interesting thing about this explanation is how well it tracks with other times we've seen technology like this before. In chess, for example, there was a period before complete AI supremacy (which is what we have now) where human + AI was better than AI alone.
I like the idea of being safe, so if the evidence goes the other way, advocating for only humans or only AI doing the driving, I want to follow that evidence. Right now I think it shows the mixed strategy is best and that is kind of nice to me because it implies that the policy that best collects data to reduce future accidents through learning happens to be the policy that is currently being used. I like that.
(Is Autopilot still limited to divided, limited access highways? Those are significantly safer than other roadways.)
No. Was it ever? All you need is a piece of road that has something which appears to be lane lines. The road to my house is usable despite having no actual paint striping because it happens to have a crack that runs fairly straight up one side and was filled with tar. So the camera thinks it's a lane line. Ta-da!
The thing is we often have discussions about this stuff, and I'm trying to advocate for citing datasets so that our words track the evidence more tightly. I'm not saying this version shouldn't have been recalled, for example, but that I think we should stay close to the evidence.
In the case of Autopilot, people made the same arguments that are now being made against FSD. I think that makes it somewhat relevant to the discussion, because people previously made the same claims about safety, but now that we have the data, we can see those claims were wrong. I believe these sorts of generalizations, though inaccurate, can help us make more informed decisions, but I'm not really confident in any beliefs formed at this greater distance from direct data.
So I think anyone who can provide datasets which correspond with FSD performance rather than autopilot performance ought to do so. That would be really great data to reflect on.
The thing I'm worried about is that no data at all is backing the conjectures - which, given that I sometimes see estimates many orders of magnitude away from data-informed estimates, seems to be the case on Hacker News at least some of the time.
I think you and I must've watched a different video.
Yeah, they have QA. But for the problem they claim they're solving (robotaxis) and the speed of pushing stuff to customers (on the order of days), it's vastly, vastly insufficient. And it lacks any regard for a safety lifecycle process - again, just look at the timelines. Even if you're super efficient, you cannot possibly claim to do even such basic things as proper change management (no, a commit message isn't that) or validation.
completely demonstrably false
> speed of pushing stuff to customers (on the order of days)
this is also false and doesn't happen
> you cannot possibly claim to do even such basic things as proper change management (no, a commit message isn't that) or validation.
you know absolutely nothing about the internal timelines of developments and deployments at tesla and to suggest it's impossible without that knowledge is just dishonest
well, if you don't get the software pushed to the QA team (the customers), how else are they going to get it tested?