Try traffic in India [1][2]. If it were a game, then India would be the last boss fight in ultra-hard mode.
I used to bike in that. It's actually not quite as bad as it looks, but I think it would confuse FSD.
There are these videos by a HUGE Tesla fanboy; just look here:
https://www.youtube.com/watch?v=vo_hC84OSwg
where at 8:20 it fails to properly deal with a cop car and the fanboy complains that the cop did not obey traffic laws (good luck arguing that when you make him do an emergency stop...).
It's even worse in
https://www.youtube.com/watch?v=vL9pKytV94I
FSD consistently has problems with pedestrians in NY, who of course will just walk also on a red light if you hesitate during turning. US people might consider this "chaos", but it's perfectly normal behavior in many countries, where a red light for pedestrians is more a suggestion than a rule...
Not only is self-driving a mess at the street level in a city like Delhi or Rome; once autonomous cars reach numbers where communication with humans becomes vital, because they're no longer just a blip in the traffic, you're in for a whole other hell of problems to solve.
Also, even in rule-abiding Germany, some traffic laws are taken more loosely than others. For instance, the law says that at a stop sign you need to come to a full stop, but most drivers don't do that and just drive very slowly (if that...). Likewise, the law says to keep 1.50 m distance from bikes and motorcycles, but in the above-mentioned narrow streets, that would often mean not moving at all for a long time. I would guess that FSD would need to abide by traffic laws just for regulatory purposes, and that would make you look like an idiot in many cities...
That's not even to speak of countries where traffic laws in general are more like suggestions...
There are people who paid thousands to tens-of-thousands of dollars for a promise that their car would eventually have "full self driving".
A not-insignificant portion of those people have lost any chance at actually getting FSD, whether by their car being totaled in an accident, or having sold it, or such, without ever seeing working FSD.
As far as I know there's no way to get a refund, and people very obviously didn't get "full self driving" as advertised by Elon, so it really does seem like some people paid for a promise that turned out to be nothing... which sounds like lawsuit material to me.
There have also been multiple public promises that FSD would be delivered in a matter of years (like in 2016, the promise that "by the end of next year, FSD will take you across the country safely while you sleep").
If anyone bought FSD due to believing those promised timelines, that also seems like it would be a pretty strong case for a refund to me.
> Lin rejected Tesla's argument that LoSavio should have known earlier. "Although Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive and that his car did not have it, LoSavio plausibly alleges that he reasonably believed Tesla's claims that it could achieve self-driving with the car's existing hardware and that, if he diligently brought his car in for the required updates, the car would soon achieve the promised results," Lin wrote.
EDIT: found the court document; the quote is in the last paragraph on page 5: https://regmedia.co.uk/2024/05/16/teslaamendedcomplaint.pdf
[0] https://arstechnica.com/tech-policy/2024/05/tesla-must-face-...
This is a fun read, even if it only goes up to 2021:
https://teslamotorsclub.com/tmc/threads/fsd-timeline-promise...
Sorry other people are having different experiences, but FSD is a significant quality-of-life improvement for me. Nothing is nicer than getting in the car after a long day and letting it chauffeur me home. (Yes, I still pay attention.)
Also you’re not personally responsible if your chauffeur has an accident.
The Elon miracle, he scams you, but you still defend him and pretend like you’re happy with the scam. "In less than a year you’ll be able to get from New York to San Francisco during your sleep". That was in 2016. 8 years later, he says "robotaxi" and people still believe him.
It’s a world scale Stockholm syndrome.
It's much more comfortable not to have to micromanage the exact position of the car at every moment, as a manual driver does.
I think your "automatic driving" argument is wishful thinking
And yet most of us like cruise control and lane keeping.
The latest FSD is pretty damn good.
And that was in the first 3 of 3 tests.
As the author of this blog post: I drive a Tesla, I tried FSD, and it failed miserably in near perfect conditions. How exactly is that derangement?
I mean this one: https://www.washingtonpost.com/technology/interactive/2023/t...
As a concrete example, it routinely switches out of the current lane to “follow route” and then immediately slams on the brakes to slow down, when no turn or exit is present. Then it turns on the signal and dutifully tries to get back into the previous lane. On screen, it says it’s wanting to change into a faster lane. This will happen multiple times on the same road. This is on CA-57 northbound in SoCal, an area where I would expect there to be pretty good testing.
Personally, AP / navigate-on-autopilot is still superior. On city streets, sure, FSD can sort-of manage. But it isn’t trustworthy enough for me to use it.
I'd have expected things like that to roll out on freeways first and later come to city streets, because freeways should be an easier case since you don't have oncoming traffic, cross traffic, pedestrians, bicycles, and traffic lights to deal with.
Tesla published a chart of cumulative miles driven over time on page 8 of their latest quarterly update:
https://digitalassets.tesla.com/tesla-contents/image/upload/...
So far, I'm not sure what to make of it. The chart looks somewhat exponential. But since they brought more and more cars onto the road, one would expect exponential growth in this chart even if no additional usage was caused by software improvements. So one would have to un-cumulate the chart and then divide the y-axis by number of cars on the road to get usage by car.
And then one would have to factor in price changes.
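Concretely, that transformation would look something like this (a sketch with made-up numbers; the real values would have to be read off Tesla's chart and their fleet reports):

```python
# Hypothetical inputs, for illustration only: cumulative FSD miles
# (millions) per quarter, and FSD-capable fleet size (thousands of cars).
cumulative_miles = [100, 150, 220, 320, 460]
fleet_size = [400, 500, 600, 700, 800]

# Un-cumulate: miles actually driven within each quarter.
per_quarter = [cumulative_miles[0]] + [
    b - a for a, b in zip(cumulative_miles, cumulative_miles[1:])
]

# Normalize by fleet size: miles per car per quarter.
miles_per_car = [m / n for m, n in zip(per_quarter, fleet_size)]
print(per_quarter)     # [100, 50, 70, 100, 140]
print(miles_per_car)   # usage per car, the number that actually matters
```

If miles-per-car stays flat while the cumulative curve looks exponential, the growth is explained by fleet size alone rather than by software improvements.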
I think the spike at the right is from the free trial they pushed out shortly after the release of FSD 12. The way I read the chart, the release of FSD 12 itself has not caused an increase in usage.
Does anybody know what caused the increase in growth in March 2023?
It’s also not an accident that the trial went live shortly before the Q2 numbers. Eager to see how the graph looks in the next report (or if it is omitted completely).
I’ve seen some pretty impressive videos on this channel: https://youtube.com/@aidrivrclips?si=dYOXOhn5cdGDpWmh
Musk claims that Tesla will announce fully autonomous driverless taxi mode on August 8th.
That's "announce", not "ship". If this were anywhere near close to happening, there would be test vehicles all over the place, like Waymo and Cruise. There would be press reports. In reality, Tesla has an autonomous vehicle test license in California from the DMV and reports zero miles driven.
I wish I had a cult that would let me get away with stuff like this.
This is where they are as of April 2024: https://static.nhtsa.gov/odi/inv/2022/INCLA-EA22002-14498.pd...
"ODI completed an extensive body of work via PE21020 and EA22002, which showed evidence that Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities. This mismatch resulted in a critical safety gap between drivers’ expectations of the L2 system’s operating capabilities and the system’s true capabilities. This gap led to foreseeable misuse and avoidable crashes. During EA220002, ODI identified at least 13 crashes involving one or more fatalities and many more involving serious injuries, in which foreseeable driver misuse of the system played an apparent role. ODI’s analysis conducted during this investigation, which aligns with Tesla’s conclusion in its Defect Information Report, indicated that in certain circumstances, Autopilot’s system controls and warnings were insufficient for a driver assistance system that requires constant supervision by a human driver."
"Tesla driver using self-driving mode slammed into police cruiser in Orange County" - https://www.latimes.com/california/story/2024-06-13/self-dri...
"Tesla in self-drive mode slams into police car responding to fatal crash": https://youtu.be/ukq6h55GnvE
Just s/driver/user/g and it sounds like a lot of contemporary LLM hype.
IMO, Tesla's not an outlier -- in today's stock-price-is-king world, it's common to see such overselling in various domains.
Also it still kills people: https://www.youtube.com/watch?v=YgFPW5esM04
> “Tesla is far ahead in self-driving cars,” Huang said in an exclusive interview with Yahoo Finance.
https://finance.yahoo.com/news/nvidia-ceo-says-tesla-far-ahe...
When the dust settles, it will certainly be taught in business schools. And Musk will be in prison (not for FSD specifically).
I watched the shareholder meeting yesterday - it was amazing. Elon repeated all the same things he has been telling us for at least the past 5 years, none of which is close to becoming a reality. And none was described in any tangible detail - all very vague promises.
As for FSD, autonomy and Robotaxis, one has to remember when it was announced and promoted - when Tesla was close to bankruptcy (per Elon himself).
That’s the problem with research; much of it turns out to be a dead-end, or exponentially more difficult as you approach the goal. FSD looked extremely likely there for a time, but I think the problem was actually AGI in disguise.
You have to wonder if she is dumb, or just knows Tesla investors are totally delusional.
Or perhaps, when you come upon an OG delusional musk worshipper, and call them out, they can point at their money pile and call you the idiot...
God will make it go to $2000.
It is very nearly standard practice in startupland to describe a yet-to-be-developed product in the present tense prior to/during development.
Strictly speaking, it’s lying/fraud, but it is so pervasive and widespread as to be expected and could rightly be called standard industry practice.
This is in no way a Tesla-specific thing.
*: The definition of "work" includes veering incl. but not limited to other vehicles, road shoulders or road divisions, sometimes self stabbing the car incl., but not limited to its driver with road railings or other roadside objects. The car might catch fire as a result or independent of the event if its feelings are hurt, or just feels like it, and burns for days, releasing its densely packed magic smoke, sweat, blood vapor and condensed tears of its designers and builders. The fumes might be toxic. Please don't inhale them.
There’s a timeline where Theranos was acquired for 9b by UnitedHealth if they could keep the grift alive juuust a bit longer and Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.
Tesla has even more and deeper financial and branding defense mechanisms. That said, the clock is ticking, now, I think
Holmes and at least some of her supporters still ardently insist, to this day, now that everything is out of the bag, the "pulling filing cabinets in front of doors to specific labs on FDA inspection days so they only see the labs we want them to" crap, all of it, that she, and humanity, have been robbed of the truly magnificent biomedical advances that Theranos was just about to solve.
FSD is like ChatGPT: it works in many cases and it makes some mistakes, but it is certainly not “useless”. It won’t replace full-time humans yet (the same way that ChatGPT does not replace a developer) but can still work in some scenarios.
To the investor, ChatGPT is sold as “AGI is just round the corner”.
It’s a way for the car industry to fight against their extinction.
In the beginning the argument was: “it’s not the cars killing people, it’s the damn ‘jay walkers’ (term invented by auto industry, btw). Get those people off the road and cram them into the sides of the streets so my fat car can ride freely!11”
That campaign worked, to some extent, and now we have a patchwork of sidewalks.
Then later…
Eisenhower (inspired by the ability for the German military to easily mobilize across the country via autobahn) pushed for interstate highways subsidized by the people. Auto industry capitalized on this and this contributed to the invention of the American suburb and slow decay of once walkable urban cores.
Cars were a luxury item. Now it’s a necessity, along with a whole laundry list of items for a car owner:
- gas
- time spent looking for charger, and charging
- parking (less space at home for living and instead using space for car)
- time spent finding public parking
- parking fees
- time spent in traffic
- car repairs
- car maintenance
- car insurance
- yearly taxes for registration
- car sales tax
- car depreciation
- toll fees for turnpike/regional highway
In recent years, people have been realizing how car centric transportation cannot scale (ie, induced demand); and is an environmental disaster.
Now the auto industry’s answer is: “oh we have self driving cars!!1 that’s going to fix it. It’s the damn human that can’t drive!! Have aRtIfICiAl iNtElLiGeNcE hold the wheel! As for pollution, electric cars will fix that” (totally ignoring the carbon emissions to transition to an EV, increased brake dust and tire wear pollution, and rolling 10 yr contribution to e-waste in the form of batteries, and a grid that has traditionally relied on non-renewable sources)
Do you have any evidence that the car industry is about to go extinct?
It seems kind of the opposite: even with multiple levels of subsidies, public transport is an utter failure in most places.
In the US, I'm pretty sure a significant majority of people are more or less all in on a car centric lifestyle. That doesn't mean it can't change, but I sure don't think it has meaningfully started to change.
https://en.wikipedia.org/wiki/Parkway
And of course it turns out Robert Moses' fingerprints are on it.
I will only reply to your last paragraph:
- EVs hardly ever need to use their friction brakes (regenerative braking does most of the slowing), so no brake dust, and certainly not more than for ICEs.
- Tires only wear faster if you accelerate like crazy.
- Batteries likely last 20 years and afterwards another 30 years as energy storage devices. And then they can be recycled with 95% of the materials being reused.
- And pollution is not the same as CO2 emissions which is what is being addressed.
Out of curiosity, do you have a citation for 20-year-old EV batteries being able to be repurposed for another 30 years? Assuming they are used for grid-scale storage, 30 years could very easily be 10,000 charge-discharge cycles (that's slightly less than one per day).
LFP batteries have a much longer lifespan than other lithium-ion chemistries, but in a brief search I can't find any claim that they will last half a century. For example, this article [1] says LFP batteries have a "calendar aging" rate (capacity loss independent of active charge-cycling loss) of "ca. 0.2 percentage points of capacity fade per month at 25°C and to ca. 0.5 percentage points per month at 50°C". So, in ideal conditions, a battery kept in storage would take 20 years to reach half capacity and 40 years to reach zero capacity. Presumably daily charge-discharge cycles would reduce that lifespan significantly.
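For what it's worth, the back-of-envelope behind those figures works out like this (a sketch that assumes the quoted fade rate stays linear over the battery's whole life, which real chemistry won't):

```python
# Calendar aging only, per the quoted figure: ~0.2 percentage points
# of capacity fade per month at 25°C. Linearity is an assumption.
fade_per_month = 0.2  # percentage points per month

def months_to_capacity(target_pct, start_pct=100.0):
    """Months until capacity falls from start_pct to target_pct."""
    return (start_pct - target_pct) / fade_per_month

years_to_half = months_to_capacity(50) / 12  # ~20.8 years
years_to_zero = months_to_capacity(0) / 12   # ~41.7 years; rounds to ~40
print(round(years_to_half, 1), round(years_to_zero, 1))
```

So the "20 years to half capacity" figure checks out under those assumptions, before counting any cycling losses.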
Car centric transportation scales just fine. That’s why most people still prefer it today, at scale. And induced demand isn’t real - it’s just unmet demand. If people want to make more trips, then it makes sense to expand roads to help them do that.
A better question is whether it's 'cost-benefit positive'. That's all that matters when users decide whether or not to use something.
If FSD reduces fatigue and allows you to arrive to work fresher, it might be worth tolerating the odd wrong turn extending the drive time by 2 minutes.
> Without FSD, you pay attention to the road and everything else is within your control. With FSD, you still need to pay attention but now there’s the additional cognitive load to monitor an unpredictable system over which you don’t have direct control.
For the author, FSD is a worse experience in addition to costing a lot of money.
It's comparable to Tesla's but not as good overall. It 100% reduces my cognitive load and definitely reduces fatigue. There is no question.
Also, I paid only ~$1500.
Is it getting better?
Is it going to continue to get better?
When will it be good enough to not have a driver?
No technology that's actively being worked on is "done". It seems silly to decide that because it isn't perfect today, it's only a useless technology demo.
The point has been made that it makes incorrect choices, and not all of them can be easily corrected.
I'm curious about whether self-driving is still an impossible task right now, or if it's just a matter of quality between companies - in which case it's possibly a fair bet by Tesla execs that they'll bridge the gap given time and money.
More generally, Waymo’s approach is to own the hardware and heavily supervise it with remote workers who can instruct it how to deal with complicated situations (eg lane blocked by emergency vehicle.) Tesla has none of that infrastructure yet. It’s sort of hard for me to see a business model where (1) the user owns the hardware, (2) there are necessary remote human beings monitoring and advising the car in sticky situations (that costs money), and (3) a third party company takes on the liability risk. The idea that you’re going to “rent out” your personal car during the day runs into the question of who pays when someone gets killed/hurt, and that immediately runs into the question of how a remote operator deals with the problem of malfunctioning hardware it doesn’t own (and why it needs to borrow other folks’ personal hardware at all.)
I drove in a Tesla with FSD 12.4 3 times and it was perfect.
it doesn’t mean anything.
The other factor is the trajectory of each endeavour. Waymo are gradually adding more cities, and Tesla FSD is gradually getting more reliable.
Both of them are going to be perfectly fine self-driving systems at some point in the future. It's an open question as to when Waymo will be able to scale up substantially, and when Tesla FSD will be reliable enough to operate as a robotaxi service.
You can see lots of videos of both on YouTube to gauge where they're up to. If you find accounts that are focused on each, you can search by oldest videos to see the progress that's been made and extrapolate from there.
Except this isn't the new FSD stack, it's the old one.
It's a fair criticism of the current offering, but it's not much evidence against a future robotaxi.
Why are people still hopeful for Tesla FSD when other companies are so, so far ahead already?
One of his most terrible (failed) marketing stunts:
https://www.quora.com/Why-did-Elon-Musk-accuse-the-cave-dive...
> I’ve had a Model Y for more than 3 years now, well before Elon revealed himself as the kind of person he really is, and I’ve been happy with it.
Oh for a good and righteous king! But not in this life; not from men.
For example:
* It still won't change in or out of a solid line HOV lane here in Arizona. Feels like an easy fix, but there it is
* I have concerns about its ability to check oncoming traffic when coming out of an occluded side street or alley. For example, my alley (where my garage leads) connects to a MAJOR road that is extremely fast. It is also fairly occluded in the side view by bushes and a light post. A human will move their head forward, crane their neck, and also be able to detect subtle changes in the light and shadows through the bush itself to determine if there's _any_ movement and interpret that movement as a potential car, even if they can't positively see a car. They can inch forward until they can see that the path is clear. The Tesla's side-facing camera is in the b-pillar, behind the driver's head, and at best, it can inch forward (and does) but gaining a high-confidence positive reading that the path is clear is... well, nearly impossible in certain cases that aren't impossible for humans, and that's concerning.
* Parking still takes one too many adjustments, and impatient drivers around you definitely notice it
* At one point, the FSD/AP engine itself crashed on me while fully engaged. Unfortunately, this happened on a freeway connector ramp with a pretty steep curve, and when it crashed, it disengaged the steering wheel and sent us careening towards the barrier: it was a single lane HOV ramp, and we were going about 70 mph, so if I hadn't been hover-handing, it would've easily resulted in a bad accident. This wasn't a case of disengagement or AP getting scared or losing confidence. The engine itself suddenly, without warning, and for no discernible reason, crashed entirely. (It immediately threw an error and said AP System Error/Take Control Immediately.) It then showed the car in a sea of black, as the visualization/FSD engine rebooted. This sort of crash is kryptonite. It's terrifying and its randomness and senselessness and opacity towards what caused it if anything is haunting. Again, a disengagement like this with no driver would result in catastrophe.
On the flip side, I was fairly surprised at how well it handled a lot of basic driving tasks. Visual-only parking still freaks me out (especially since my model HAS ultrasonics, but you disable them when you go visual-only, which is absurd), and a couple turns felt close to the curb, but overall, driving was fairly smooth and decent.
I have the added benefit of living in Phoenix, which is Waymo country. Waymos drive more confidently, and more importantly, are already fully autonomous. They navigate complex environments fairly decently (though, for example, my dad got stuck in one doing loops of a dealership parking lot that confused it a few weeks ago) and they're comfortable to ride in. They're not yet on freeways, but apparently that'll change soon. They also only go the speed limit, which in Phoenix is... a choice.
Elon keeps pushing this dream of a robotaxi fleet of Teslas, but I agree with the OP that it feels a far way off before I'd be comfortable with the idea of these things fully autonomously driving, and I say this as someone who sees a half dozen Waymos every single day. I also wonder more broadly about the core conceit here: not that fractional car ownership doesn't make sense; it absolutely does, but in the idea that Tesla owners are going to be comfortable with their ~$50k-$150k vehicles roaming around and picking up strangers who... hopefully don't do things to their car, all while hoping the car comes back home. I don't believe Elon was pitching the robotaxi as being wholly Tesla owned vehicles, but it seems like a big societal shift to get people comfortable with their cars having minds of their own, and taking in randos.
The 12.3.6 (or the 12.4.1 that's being trialed right now) is pretty much the current production state of the art for Tesla street level (not highway) single stack self driving where the steering commands are directly issued by the neural network.
It seems the lessons of Therac-25 were not only ignored, but thoroughly trampled underfoot. WTF.
It would clearly show progress, and help people understand that what they're seeing on the road isn't proof yet.
I prefer the previous pseudo-FSD which simply stayed in the current lane, and relied on human input for anything more than that. This whole FSD update has been a sobering realization that I really only want the safety features.
Elsewhere in this discussion someone noted that the latest FSD updates don’t apply to highway driving. I don’t own a Tesla and am not a fan of them due to the various privacy issues and reliance on touchscreen controls. But for what it’s worth phantom braking is an issue I have on my non Tesla cars with their driver assistance features. I’m not sure if it is worse on a Tesla, but anecdotally in my friend’s Tesla I was blown away by how comparatively advanced a Tesla feels.
another place where this is in play (also in the ai space) is the openai mumbo-jumbo. while there is plenty of discussions on hn about the real efficacies of genai, we have to agree that it has motivated companies to drop bucket loads of resources into its advancements. same way, the auto industry and researchers got a reason to fund designing the cars of the future.
a smaller example where this worked out was the macbook air and ultrabooks space. while everybody was captivated by the original fitting inside a large envelope, it was an overheating, slow (even for its time) mess for many years. but by creating a clear market demand for the vision, we finally have devices that meet the original vision.
while i cannot stand behind the current driver of FSD tech (or ai for that matter), i respect their role as a catalyst in research.
It’s almost killed me once, when it was about to blow through a red light at ~70mph, while a car was about to turn left in front of me. There was no indication that it was going to stop, so I slammed on the brakes.
Then, after about three weeks, it stopped tracking the lanes properly, and would drive straddling the lane divider. I’d repeatedly enable it when I was in the far left lane of a 4-lane road, and watch as it would promptly begin driving about two feet into the right lane. Disable and re-enable, and it would repeat this engineering feat. It continued for 2 days. Yesterday it was just driving extremely right-justified. If there were any cyclists in the bike lane, my mirrors would be intruding into their space.
Of course I bought the FSD function as I knew that I would be attentive and provide feedback, kind of a service to humanity, as this feature is far from prime time, and isn’t safe. I may cancel the service as I’m concerned that my 17 year old new driver will enable it, and not be as attentive.
More and more, I feel that we need at least a couple more paradigm changes - specifically:
- More understandable models - being able to get good answers to "why did you do that"
- Better retraining - Don't do that thing! (without training a whole model)
- Better internal learning - don't wait for the next download
Non-ML changes: public understanding and acceptance of self-driving, plus a favorable insurance and legal framework. Public understanding includes improvements to two-way communication between humans and self-driving vehicles.
Legal framework includes limited liability for manufacturers. NOTE: I don't love this idea or even like it, but without it we will only get driver enhancement features, not SDC. (If you're angry at this suggestion, know that I am too)
---
My basic approach to all of this is that you need to have a solution to any problems that automation will encounter. The World is a problem generating machine. Most solutions to automation problems involve constraining the problem space. But as that guy said "Life, uh, finds a way"
It never happens at night though, which in my mind makes the shadows hypothesis weaker.
(Also, the autopilot of my wife’s Audi is a huge regression compared to the one of my Tesla. Though that’s the 2019 model, so who knows…)
Musk has been making these kinds of pronouncements for years, and the company has yet to deliver.
He still has friends and believers.
https://www.cnbc.com/2024/06/13/tesla-shareholder-vote-on-mu...
The only way out I see is separate or vastly updated infrastructure - separate roads and highways, and streets with specific signs and markings to help FSD vehicles navigate. For example, to ban FSD vehicles from taking certain risky routes.
Logical fallacy in the second paragraph. Doesn’t bode well.
At the root of the Musk hate seems to be the fact that he (loudly) doesn't subscribe to a handful of political positions that some folks have adopted as axiomatic in their moral worldview. They essentially see him as a heretic, and attack him for stuff that others get a pass on. I mean, Zuckerberg made his billions spying on everyone and destroying the mental health of teenagers. His businesses have a tiny fraction of the socially redeeming value of Musk's. But Zuck doesn't get a fraction of the hate Musk does.
Jeep and others have it but drive like a bowling ball with guardrails up.
Has anyone here successfully retrofitted Audi a4 with ACC? I hear Cruise started like that, would be awesome to get a kit like that.
Gas, brakes, and steering in cars are controlled by separate computers, each with its own firmware - there's no centralized keyboard controller or USB root hub to take over. The computers have to be designed and built to accept driving commands, be replaced by ones that do, or be mechanically actuated. The latter two paths are rarely taken.
Regular cruise works fine on highways, and my experience with adaptive is that it makes passing more difficult, as it slowed down way too early (though other implementations might be better). Plus, the focus on smooth, comfortable slowdowns means the deceleration can be hard to notice while you're watching the road and your surroundings, at which point you're quite a ways below speed for the passing lane and need to accelerate back up.
It also doesn’t deal well with passing trucks, for lack of anticipation.
> Another case where I’m sure it would have corrected itself eventually [...].
Why? Those are some bold assumptions. If it's going to fail, let it (as long as it's safe and legal), and see what it actually does.
2. The post is about three test rides. Sure he may have had a destination to reach at the same time, but he was testing.
3. It probably wasn't a life or death situation the way it was written. "The car would have figured itself out eventually" does not sound immediately life threatening.
4. There's no basis for "the car would have figured itself out eventually". I felt the bits I quoted came off a bit too hand wavey for something that is labeled a test. Either don't include it at all, or see what actually happens (within reason).
If we wait for perfect it will never happen.
FSD requires AGI. These systems are all expensive science fair projects and have already killed a number of people.
For the last 2-ish years, companies found a way to throw supercomputers on a preprocessed internet dictionary dataset and the media gulped it up like nothing, because on the surface it looks shiny and fancy, but when you peek it open, it's utterly stupid and flawed, with very limited uses for actual products.
Anything that requires any amount of precision, accountability, reproducibility?
Yeah, good luck trusting a system that inherently just learns statistics out of data and will thus fundamentally always have an unacceptable margin of error. Imagine using a function that gives you several different answers for the same input, in analytical applications that need a single correct answer. I don't know anyone in SWE that uses AI for more than as a glorified autocomplete which needs to be proof-read and corrected more often than not, to the point of often being counterproductive.
TL;DR: it is not at all surprising that FSD doesn't work, and it will not work with the current underlying basis (deep learning). The irony is that the people with the power to allocate billions of dollars have no technical understanding and just trust the obviously fake marketing slides. Right, Devin?
Does something have to be perfect to work? If that's the case we shouldn't have bridges because sometimes they fail?
We can change roads, ffs! At the bare minimum, we could fence off difficult areas and force manual driving. Once many people own an autonomous car, there will be pressure to make roads safe and convenient for them.
Phantom braking for instance, is only a safety issue when the following car gets too close. Lanes can be annotated, etc.
The "AI" part is just marketing to get these cars on the road and set the customer's expectations. Once that is done, the hard parts will just be moved from the car manufacturer to the road builders.
The US has a road network of 7 million km, and apparently it costs about $1 million to $45 million[0] to build 1 km of road, depending on the nature of the infrastructure. Obviously the higher end is for large roads built in city centres, but even at the lower end of the cost spectrum it's crazy expensive.
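A quick back-of-envelope check of those figures (using the 7 million km network size and the $1M–$45M per km range quoted above):

```python
# Back-of-envelope total cost of rebuilding the US road network,
# using the figures quoted in the comment above.
network_km = 7_000_000          # total US road network, km
low_cost_per_km = 1_000_000     # low-end build cost, USD per km
high_cost_per_km = 45_000_000   # high-end build cost, USD per km

low_total = network_km * low_cost_per_km
high_total = network_km * high_cost_per_km

print(f"${low_total / 1e12:.0f} trillion to ${high_total / 1e12:.0f} trillion")
# -> $7 trillion to $315 trillion
```

Even the low end is several times the entire annual US federal budget, which is the point: per-km costs that sound modest become absurd at network scale.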
IMHO it doesn't make sense to rebuild the infrastructure for autonomous cars. If you're going to rebuild the infrastructure anyway, you'd be better off building continent-wide mass transit. At least you won't need parking lots.
[0] https://compassinternational.net/order-magnitude-road-highwa...
And we want to spend billions on autonomous driving so we can stay locked into the absolute least efficient solution, and so we can require manufacturing another $10k or so of equipment for hundreds of millions of cars to enable FSD?
This is a sham sold for the money.
Self driving will figure it out anyway (given enough time).
Many of us knew that Elon was full of shit long before, but until he started spewing his politics no one cared to listen even to respected industry experts. A sad state of affairs IMO.
If you're buying cars based inversely on how much money the company owner has, you're implicitly ruling out a lot of good options. It is using the language a bit loosely but in some relevant sense we'd expect people who make the most cost-effective products to make the most profit.
The cult of hate seems a lot more like a cult than the alleged cult of personality. This post and comments like yours only make it more apparent.
Tesla is an extreme example: Elon is either Jesus or Judas, there is no in between. I have a Powerwall, because at the time it was the best product.
When I do talk about it, the second or third question is: "what do you think of, you know, nod, nod, wink, wink".
He's a polarising figure, which makes getting useful reviews out of their products/business practices hard. (same with Facebook, Nestle and other pariahs.)
I mean I kinda get it? But everyone's doing it and it's weird. It's almost like writing a review of meat and apologizing for eating meat beforehand.
I've seen it firsthand living in a very liberal area. 3 years ago people loved the brand. People would strike up conversation about it. It was all upbeat, happy and hopeful.
Now, people rarely talk about it. And when they do it's apologetically. It often ends with some down statement about politics. Someone saying there were no other choices. Just a crappy time.
I love my Tesla. It's a great car. But it's my last. I want nothing to do with this type of politics or conversation.
I for one am sorry about its frontman. It's a good car that I don't want again... That's just stupid of him.
I'm sure I would agree with some of them and disagree with others but they are smart enough to not make that a center piece of their public persona.
Elon Musk decided on his own to
a) be a central marketing and PR channel of Tesla. He has everyone convinced that the company cannot thrive otherwise.
b) involve himself in hot-button political discussions all around the globe (usually in a "hot-take-no-need-for-further-research" manner).
c) buy a social media company while publicly lamenting the state of social media. At said social media he (again very publicly) instituted changes to align it with his political philosophy.
Whether one likes or dislikes his opinions, it is very clear that he wants to be seen as some kind of political influencer and that he bases some of his business decisions on this persona.
This is quite unusual and probably the sole reason people even think about the "frontman" of the company when discussing the product.
Do you celebrate the moon landing? The US maintaining a constant lead in aerospace over the rest of the world? Those were built by the Mussolini supporters of the world. NASA and Saturn V would not have existed if not for the nice German folks who were recruited to work for Uncle Sam. Science and engineering are more apolitical than you think.
> When I buy my next car, my requirements will be simple: I want an EV, an extensive charger network along I-80, and an autosteer that’s at least as good as what I have today. Let’s hope there’ll be decent Tesla alternatives by then.
Gee, I wonder if this is going to be an unbiased review of the technology.
Either Tesla and SpaceX are not congruent and coterminous with Elon, or he’s doing fine. Either way, the ad hominem has no place in the discussion of the works of the companies he runs. Both Tesla and SpaceX are kicking ass, so either you have to give him credit for it, or you have to stop bashing the company if you don’t.
As a Tesla M3 owner (in Europe where we don't have FSD yet) I cannot wait to have it for long road trips on highways where I want to relax and have a "copilot" do some of the "thinking" & driving for me.
Yes, there are still some instances of FSD Supervised doing strange things in the YouTube videos people are posting, but it's definitely no longer a "demo"! It's real and it's only a matter of time before it's better than humans in most circumstances ...
Now as a TSLA shareholder, I'm still sceptical that the RoboTaxi/CyberCab system will be 100% foolproof ... But if they can prove through data that it's safer than 99.9% of human drivers on every day journeys and thus get regulatory approval it will be game changing! If Tesla can do 90% of short ride city driving at a fraction of the cost of Uber/Lyft they have a real cash cow on the horizon!
Tesla is near the bottom of the pack on self-driving tech, with no real hopes of going beyond the level 2 self-driving of FSD, which is objectively worse than just driving yourself. There are already multiple companies with either actual self-driving fleets (e.g. Waymo) or limited use cases of full autonomy with company liability (Mercedes), and many others with working prototypes for one or the other, or even beyond.
Tesla is living in their marketing bubble of "FSD robotaxis next year now", where they have been living since 2018 or earlier, while the rest of the pack barely even sees them in the rear view mirror.
Are you _trying_ to make people not take you seriously? No one has the kind of tech Tesla unveiled back at Autonomy Day, which is still state of the art technology-wise. Getting that much NN compute into a 100 W power envelope was a genius move, as was bringing the talent in early to build it. Now that they've had more time to develop the tech, FSD is looking better than ever, and in a few years Tesla will be licensing this around for a pretty penny. Eventually governments might even mandate it, considering the comparative danger of human drivers.
The implication is that Tesla can't make it work. In the same way that Meta are unlikely to make AR glasses a "thing".
> As a Tesla M3 owner (in Europe where we don't have FSD yet) I cannot wait to have it for long road trips on highways where I want to relax and have a "copilot" do some of the "thinking" & driving for me.
depending on which country you are in, you'll be on the hook for any mistakes.
> It's real and it's only a matter of time before it's better than humans in most circumstances ...
This is the fallacy of statistics. It probably performs better than or similar to humans in 80% of cases. The problem is that it performs significantly worse when it fails. It does not fail safe. That's the really hard part.
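A minimal sketch of why "better on average" can still be worse overall, with entirely invented numbers: two systems can have the *same* failure rate while one fails gently and the other fails catastrophically.

```python
# Toy illustration (all numbers invented): two drivers with the same
# failure rate per mile, but very different severity when they fail.
# Expected harm = failure_rate * average_severity, so a system that
# "rarely fails but doesn't fail safe" can carry far more risk than
# one with an identical failure rate that fails gently.

failure_rate = 1e-5        # failures per mile, same for both drivers

human_severity = 1.0       # mostly fender-benders, recoverable
machine_severity = 20.0    # fails in ways a human rarely would

human_risk = failure_rate * human_severity
machine_risk = failure_rate * machine_severity

print(machine_risk / human_risk)  # -> 20.0
```

This is why comparing raw disengagement or crash *rates* between FSD and humans tells you very little without the severity distribution attached.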
I also question the wisdom of using unverified recordings of average drivers to train on. Unless you know how good they are, you're going to be feeding really bad behaviours into the model.
Driving with FSD on seems to be more stressful than just driving.
Driving with FSD on empty roads is very enjoyable and whenever it decides to do something you can recover/correct with no stress.
Driving in poor conditions with FSD feels safer, it sees better than me in the dark and rain for sure.
Driving with FSD in heavy traffic is about the same as driving the car yourself; this is where I usually drop back to AP.
Is driving my tesla easier or harder than another vehicle? It’s much much easier than driving my ICE suv. But from a pure FSD perspective, if the car didn’t have AP I believe my opinion would not be the same.
I heard that argument for many years for many deep learning applications of significant value. Any day now, right?
Almost 30 years later, OCR and voice recognition, now backed by machine learning, are far more impressive than they were back then, but they still keep making mistakes that I have to fix. And those are far, far easier problems than driving.
And this goes on for a bit longer up to the present day.
After close to a decade of failed promise again and again and again I think we can safely dismiss the "perhaps a bit too optimistic" good-faith defence.
But hey, maybe this time it's just around the corner, cross my heart hope to die. And maybe the rapture will be upon us next year like that guy is preaching on the street corner (for real this time – last chance to repent sinners!)
And I don't think that you understand it either. In the real world, all exponentials are just the early part of an s-curve. Everything has limits, reaches diminishing returns and tops out.
If a technique is already topping out, there won't be great leaps in performance without fundamental changes. An incremental "update" won't do it at that point in the curve.
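The exponential-is-really-an-s-curve point can be shown with a plain logistic function (parameters here are illustrative, not a model of any real technology):

```python
import math


def logistic(t: float, limit: float = 1.0, rate: float = 1.0,
             midpoint: float = 10.0) -> float:
    """Logistic (s-curve) growth: looks exponential early on,
    then saturates at `limit`. All parameters are illustrative."""
    return limit / (1.0 + math.exp(-rate * (t - midpoint)))


# Early on, each unit of time multiplies progress (~factor e per step),
# which is indistinguishable from a true exponential:
early = [logistic(t) for t in (0, 1, 2)]

# Late in the curve, the same step size barely moves the needle:
late = [logistic(t) for t in (18, 19, 20)]

print("early growth factor:", early[1] / early[0])   # ~2.7x per step
print("late gain per step: ", late[2] - late[1])     # near zero
```

An incremental "update" at the late end of that curve buys almost nothing, which is the diminishing-returns point above.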
You can pick examples from several hyped technologies today that are going to work properly and change the world "real soon now".