And lidar wouldn't be expensive if manufactured in automotive volumes. Certainly less, per vehicle, than Musk charges people for "full self driving" at the moment.
California allows autonomous vehicles to be tested on the road, so long as every disengagement is reported (along with total miles driven etc). Waymo is testing, reporting mileage and disengagements. So are Toyota, Nvidia, Mercedes, BMW, Cruise, Lyft and Apple.
Guess who's too shy to have driven a single autonomous mile in California, where faults have to be reported? That's right, Tesla!
Tesla might be able to make vision-only driving work. But Musk has been promising deadlines and then missing them for years. They've put all their chips on 'no lidar', and they've had a bunch of problems that lidar could trivially solve - such as detecting a fire truck or concrete barrier right in front of the vehicle. So it's far from obvious to me that they've got a winning approach.
Apparently they have neither, since they missed their deadline three years ago and have continued to miss it every year since.
> All the other players try to solve this with lidar and cars that cost around 500k to build
Citation needed?
> Tesla may need another 10 years
What has you this pessimistic? Tesla promises full self-driving by the end of the year, every year. Are you saying a random commenter on the internet knows more about the state of their AI than they do?
> This approach will never solve L5.
Again, Tesla advertised FSD as Level 5, with robotaxis ready by 2020. Sounds like it was falsely advertised, right?
“they have pretty much 0 data except for the maps they generate themselves”
What do you mean by this? You realize the bottleneck for training-data generation is always human labeling, not the raw amount of data, right?
Comma has been using that approach from the start with a cheap smartphone-like device.
No amount of "training" can fix the problem of "AI" not being AI.

These people have a very poor idea of what they are talking about when they use the phrase "artificial intelligence." It's a clear misuse of the term.
Can someone please help me across a conceptual bridge here?
Is there some work I'm not familiar with that shows humans accomplish L5-grade driving using something biologically equivalent to the NNs Tesla uses? I'm not talking about doing it quickly; at this point I'm interested in Tesla, or anyone else for that matter, demonstrating doing it at all, at any speed. It could be at an agonizingly slow 0.25 km/hour and that would be fine.
I'm having trouble bridging between L5-the-destination and NNs-are-definitely-the-way-to-get-there. This sounds an awful lot like saying NNs-are-the-Moravec's-Paradox-solution, and I'm not sure I've read conclusively how that can be true. I can accept it as a hypothesis, but other than actually trying it out like Tesla is doing, I haven't read why it is such a strong conjecture.
It sounds from articles like [1] and [2] that Tesla is only just now starting to really apply NNs more broadly to the problem space, and that the prior years mostly focused on more conventional machine-vision techniques and on getting clean data for NNs to ingest. But I've yet to read a convincing explanation of how ML will functionally solve even the subset of Moravec's Paradox needed to accomplish L5. I grant that it will solve a facsimile of the paradox, but whether it will be a reasonable facsimile is arguable. That sounds an awful lot like "we'll brute-force enough training data at it to reach 'reasonable facsimile' level", and I'm cautious when I hear of brute forcing as a strategy for arriving at R&D results.
[1] https://insideevs.com/news/466239/tesla-migrating-to-neural-...
[2] https://electrek.co/2021/02/08/tesla-looks-hire-data-labeler...