If that's the case, Tesla will probably figure this out as well.
Tesla's plan is (or has become) to do an end-run around all that: train a giant network on a camera-only sensor stack, so that the car can navigate without large 3D representations of the environment / city it operates in, without expensive lidar/radar sensor suites, and without the city-by-city "partner" phase that Waymo and others go through.
This allowed them to bring me, a MN customer, something like lvl 3 autonomy before any other company did. But it might not have the same upper bound as other, more fine-tuned approaches, and having ridden in Waymo, Nuro, etc. vs. my own Tesla, I can tell you the Tesla is wonkier for it. Time will tell.
I'm quite sure Mercedes-Benz was the first to bring lvl 3 autonomy to market.
https://arstechnica.com/cars/2023/09/mercedes-benzs-level-3-...
It is also the only carmaker confident enough in the system to take full liability for it:
> Confidence in Drive Pilot is high within Mercedes-Benz, as the system has been active in Germany for over a year without incident. That confidence is demonstrated by Mercedes’ decision to assume liability for the vehicle while Drive Pilot is in use. That’s a particularly bold move since no other manufacturer offers that kind of assurance.
I'm quite confident that lvl 3 autonomy is becoming widespread, regardless.
Besides, I'm pretty sure some degree of mapping is necessary - I know some seriously wonky roads with poor visibility, tons of shoulder lanes, roundabouts, and stop-and-go traffic, where I need to know which lane to get into half a kilometer before the turn comes up.
Most people can't figure it out at first glance - I usually see a couple of drivers trying and failing every day.
I don't see how Tesla is even a serious contender.
It's possible that there exists some error metric inside Tesla that consistently goes down with more training and bigger neural nets in their Vision FSD - whereas switching to LIDAR would reduce that error by a fixed 30%. If the curve keeps falling, enough extra training eventually closes any one-time gap, so they just assume that vision will eventually work out.
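A toy model of that bet (the power-law form and every number below are my assumptions, nothing from Tesla): if vision-only error falls as a power law with training compute, while lidar gives a one-time 30% cut, the crossover point is easy to compute.

```python
# Hypothetical illustration only: assume vision-only error follows a
# power law err(c) = a * c**(-b) in training compute c, and that
# adding lidar cuts today's error by a fixed 30%.

def vision_error(compute, a=1.0, b=0.25):
    """Assumed power-law scaling of vision-only error with compute."""
    return a * compute ** (-b)

current = 1.0
lidar_error = 0.7 * vision_error(current)  # one-time 30% reduction (assumption)

# Solve a * c**(-b) = 0.7 * a for c: the compute multiple at which
# vision alone matches the lidar-assisted error.
breakeven = 0.7 ** (-1 / 0.25)
print(f"vision catches up after ~{breakeven:.1f}x more training compute")
# Under these made-up numbers, ~4.2x more compute erases the lidar edge.
```

The whole bet hinges on that curve not flattening out before the crossover.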
Apple Watch is probably one of the greatest examples. So many of its features are inferred via "basic" sensors.
On a different angle, sports refereeing is increasingly automated thanks to advances in camera-based analysis. We can turn 2D images into a nearly centimeter-accurate representation of a playing field in seconds.
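The standard trick there is a planar homography: given a handful of known landmarks, you can map image pixels onto metric field coordinates. A minimal sketch with OpenCV (the pixel and field coordinates below are made up for illustration):

```python
import numpy as np
import cv2

# Pixel positions of four known field landmarks (e.g. penalty-box corners)...
image_pts = np.array([[412, 310], [868, 305], [955, 620], [330, 628]], dtype=np.float32)
# ...and their real-world positions on the field plane, in meters.
field_pts = np.array([[0.0, 0.0], [16.5, 0.0], [16.5, 40.3], [0.0, 40.3]], dtype=np.float32)

# Estimate the 3x3 homography mapping image pixels to field coordinates.
H, _ = cv2.findHomography(image_pts, field_pts)

# Project an arbitrary detection (say, a ball at pixel (640, 480)) onto the field.
ball_px = np.array([[[640.0, 480.0]]], dtype=np.float32)
ball_field = cv2.perspectiveTransform(ball_px, H)
print(f"ball at ~({ball_field[0, 0, 0]:.2f} m, {ball_field[0, 0, 1]:.2f} m)")
```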
These cameras operate in a very different and much less dynamic environment than a camera on a car doing 100+ km/h while getting splashed on, shat on, dusted, muddied, struck by bugs, snowed on, etc.
Starting with "basic" sensors is backwards. It is like aspiring to become a chess grandmaster so good you can play with your eyes closed, and starting out as a beginner with your eyes closed.
Whether this is the right way to deliver self-driving cars, we will find out soon enough. Long term, though, it definitely makes sense. We just don't know what the missing pieces of the puzzle are.
This is commonly repeated, but it's very obviously untrue.
We don't only have vision. We have general intelligence, coupled with vision. In the absence of AGI, the base assumption has to be that the sensor apparatus needs to be significantly superior to a human's for an FSD system to drive at a comparable level.
But humans don't drive at a level worth matching. I can't see how anyone could look at modern driving and see an optimal state. Driving isn't being managed at all; it's killing droves of humans.
If we put the same restriction on airplanes ("flying by instruments is a crutch"), everyone would rightly find it ridiculous.
They appear to have bet on the wrong technology. The failure happened back in the design phase.
Spend a few million years programming a computer to swing through trees, and you'll probably end up with something that can drive a car.