Tesla et al., however, do not have the luxury of ignorance to explain that away: they know what the technology is and is not currently capable of, but they don't want to admit it.
If the explicit goal is to create a human intellect, then sure, there's a really interesting conversation there—one that is happening constantly in the DL/AI research community, in which virtually no one believes that we're close to AGI or that current deep learning is going to achieve it.
But that's explicitly not the goal that 99.9% of neural networks are designed with. Their traditional use case is where they excel: programmatically approximating functions that are exceedingly hard to approximate manually.
This includes but is not limited to image recognition, speech synthesis, recommendation (including search), fraud detection, ETA prediction, even medicinal chemistry.
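The function-approximation point is easy to see in miniature. Below is a toy sketch of my own (not from any comment here): a one-hidden-layer network trained by plain gradient descent to fit sin(x), which stands in for the kind of mapping that is straightforward to learn from data but awkward to specify as hand-written rules.

```python
import numpy as np

# Toy illustration: learn y = sin(x) from samples with a tiny tanh network.
# All sizes and learning rates are arbitrary choices for the demo.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)

hidden = 32
W1 = rng.normal(0.0, 1.0, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.1, (hidden, 1))
b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(x)
loss0 = np.mean((pred0 - y) ** 2)  # mean squared error before training

lr = 0.1
for _ in range(3000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                    # gradient of MSE w.r.t. pred (up to 2/N)
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h**2)    # backprop through tanh
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(x)
loss = np.mean((pred - y) ** 2)
print(f"MSE before: {loss0:.4f}, after: {loss:.4f}")
```

Nothing about the training loop "knows" trigonometry; it just adjusts weights to shrink the error on examples, which is all the listed use cases (recognition, recommendation, fraud detection) ask of it.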
Watch humans respond to animals that don't use eyes the way we do (e.g. bats, insects): fuckups are a constant. We are very bad at interacting with anything that doesn't observe the world through something similar to our eyes.
And third, the world has almost entirely been rebuilt to compensate for human observational flaws. It's not just staircases having a step height that works well for humans; highway intersections, for example, have been changed a hundred times until we found designs that humans respond to without slamming into the divider. The same is true for many ordinary intersections. (I first started realizing this when reading an article about an intersection at a bridge that was modified after five people died when a car crushed them against the side of the bridge; it was redesigned.) Now we find that an algorithm with an entirely different set of observations makes different mistakes... not really that strange. Perhaps we should start modifying the streets that algorithms misjudge.
The warning cones used for accidents or road works, for example, have also been adapted many times because version X was "causing too many accidents".
So in a bunch of cases it's neither that humans lack big observational flaws nor that algorithms have many more. It's just that we largely eliminated the human ones. Not by eliminating them from humans, but by eliminating them from the world.
Same is true on the inside of buildings.
Microsoft’s Kate Crawford: ‘AI is neither artificial nor intelligent’ (https://www.theguardian.com/technology/2021/jun/06/microsoft...)
There are a lot of tasks that we humans do the same way a machine does: repeating a set of mental and/or physical patterns until they become second nature. Those are called "habits", and they are precisely what machines are good at.
Intelligence is a different kind of processing. It resides in the particular form of processing most often found in the mammalian brain — a processing we know intimately as conscious experience. Every human thought, word, and innovation was formed within human consciousness. There’s no difference between consciousness and intelligence — they are the same.
It’s here at the “hard problem” that most (but not all) ML research turns aside to follow the “bitter lesson”, hoping that the difference between instinct and intelligence is merely one of scale.
But as OP points out, the difference is one of kind.
Yes, most people do not know the difference between ML, AI, neural nets, and computation. Nonetheless, we've reached the point in humanity where there is no question a Pandora's box has been opened. There is a very real reason why there would even be gag orders on public information if an entity achieved some level of strong AI.
And to your point about it just requiring more training: yeah, it kinda is that simple for the majority of tasks, which is itself enough to warrant serious contemplation. A wide spread of weak-AI solutions that fake "strong AI" will probably be more dangerous long-term than a true "strong AI" solution, due to the fine-tuning problems it would naturally have.
It's a big discussion; overall we need to be less certain about the state of things, because there is very good reason why such an event would _not even be obvious when it happened_. A period of uncanny valley at most, and then you realize: oh shit, AI has been running the world since... APT and DDoS patterns.
It would help if Elon Musk didn't tweet out ridiculous claims about FSD.
Will they ground all the Tesla cars remotely? Or disable autopilot remotely? Until they have gathered new training data and updated the software?
Not saying cars should do the same, just that it's not absurd to consider it.
If everyone was flying their personal planes around and constantly banging into each other a minor software bug would not ground any planes. It would be more like a malfunctioning airbag recall.
What we need is a big fat lawsuit, since the government will not do anything to anger the only profitable US automaker.
I tried FSD on the $200/month plan and dropped it: It makes the car unsafe. To command a lane change you hold down the turn signal stalk. If you fail to hold it down long enough the car suddenly swerves back to the lane it was in. This is (to say the least) disconcerting at 80 mph.
FSD can also suddenly decide to do weird things that are difficult to correct even when you're paying close attention. It's unnerving. Ordinary autosteer (which is included with every Tesla at no extra charge) works well enough for me and it fails in more predictable ways; it's easy for me to build a mental model of its limitations. I'll stick with that.
And there have been at least two incidents that I can recall where Autopilot saved me from a wreck.
I would not choose to go back, and would buy it again without hesitation.
So... yeah it's gonna take a lot of deaths for this to get regulated. It's a shame cuz we already basically know how to regulate this.
Self-driving cars will make fatal mistakes, but I have no doubt that Tesla will very soon be safer than the average driver.
Plus, the more autonomous vehicles on the road, the safer it is.
Finally, traffic accidents are currently the leading cause of death for 30-year-olds, so we aren’t exactly replacing a perfect system.
Humans generally do not give their cars haircuts by slamming them under a stopped cargo truck. They do not generally smash straight into stationary emergency vehicles.
We can’t handwave basic safety issues away by saying “in aggregate, they perform better than humans in most conditions.” The basic safety issues get people killed. Leaning on some “average driver” fallacy is a way to ignore issues core to the tech stack.
It's not uncommon: https://en.wikipedia.org/wiki/2009%E2%80%932011_Toyota_vehic...
So even when those cars ram into fire trucks from time to time, it would be better to let them do their thing. Otherwise people will grab the steering wheel, drive drunk, sleepy, angry etc and ram into all kinds of things again.
Currently, there are 6 million car accidents per year in the USA. Almost 100 people die in car accidents every day. So there is a ton of data to make the decision.
This sort of statement keeps being parroted over and over again. As Linus would say, talk is cheap, show me the code; then we can speak.
>So even when those cars ram into fire trucks from time to time [...]
This is just insane, honestly, and if this is the premise that guides the development of these sorts of systems, I'll be glad to never set foot in one.
I have been driving for almost two decades with 0 accidents. I'm not saying I can't have a lapse of judgement or do something stupid going forward, but I certainly won't misclassify an object, nor kill myself over it.
I hypothetically want bad drivers to be replaced by AI because it's likely already better. But replacing everyone with AI (at the current generation of AI, which isn't the first, nor the last) will undoubtedly lead to tons of avoidable deaths, and I'm not keen on drawing a lottery ticket for it.
It gives me a wary feeling when people talk about tech regulation and warn that it would change the internet as we know it. Like, if putting the externalities on the company means the company can’t exist as it does today, is that really so bad?
If the error rate is only 0.5% but the death rate of those errors is 100% I am not sure it will be justifiable.
Ex: Imagine a week where self-driving cars have a bug that only mis-identifies grandmas. Only a few grandmas die but the perceptual impact is massive.
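The arithmetic behind that worry can be sketched with invented numbers (all rates below are made up for illustration, not taken from any study): a system whose errors are rarer but far more lethal can still kill more people than an error-prone baseline.

```python
# Back-of-envelope only; every rate here is a hypothetical for illustration.
trips = 1_000_000

human_error_rate = 0.05          # 5% of trips involve some driver error...
human_fatal_given_error = 0.001  # ...but almost all of those errors are harmless

ai_error_rate = 0.005            # the 0.5% error rate from the comment above
ai_fatal_given_error = 1.0       # worst case: every error is lethal

human_deaths = trips * human_error_rate * human_fatal_given_error
ai_deaths = trips * ai_error_rate * ai_fatal_given_error

print(human_deaths, ai_deaths)  # 50.0 5000.0
```

Under these (deliberately extreme) assumptions, the "10x lower error rate" system causes 100x the deaths, which is why the raw error rate alone can't justify deployment.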
36,096 deaths in 2019 in the U.S.; ~1.3 million worldwide. (I couldn't find injury statistics this morning.)
If the flaw is found before someone dies from it I'm not concerned. If 1 person dies instead of 10 I'm all for it. (I'd take 2x better than humans any day)
Unlike autopilot, which could still be crappy after 10 years.
Yet we have made them illegal after some rather nasty precedents. Seems like history repeats itself.
I believe that there have been studies showing that today's "self driving" cars are already statistically safer than regular cars.
I'd feel best if that decision were made at the smallest community level possible, so ideally county by county rather than federally. That lightens the burden when politicians make the wrong choice, or when you're a citizen who disagrees with the right one.
Having to switch the mode of operation of your car depending on what side of various county lines you are on seems like an obvious regulatory failure.
Others are bringing up tired, drunk, texting... All real problems, but following too close is universal to nearly all drivers.
If I can reduce the error rate by 90%, but the remaining 10% are "random" (whatever that means), is that worse than not reducing the error rate?
We don't have a good frame of reference for how machines might behave with their failures, which means that accidents could be worse than they would be otherwise.
Palantir is now a "pure-play AI company"? (And, for that matter, a market cap of $50b is 'less than outstanding'?)
Less than outstanding outcomes
Market cap is their outcome, not their clients' outcomes. The two are decidedly different things, especially in our weird distorted market.
You can see this everywhere. For example, those apps that generate a non-existent person: a lot of the time the results are great, except for that one spot which makes the overall result useless.
Another example is the OptiX denoiser (NVIDIA). You can get very nice renders in a few seconds, which speeds up the workflow, but there are always areas with a lot of flaws. That doesn't matter while you are still working on something, but for production it is useless.
ML has its uses in a lot of areas where the outcome doesn't have to be perfect. But I am still not convinced it is 'production ready'.
Things like:
- the "warm" sound of vinyl records
- nostalgia for early MySpace, Tumblr, Geocities web design
- faux Edison lightbulbs
- low vs. high frame rate movies
I wonder if the flaws of all the current ML techniques will eventually be thought of similarly.
Article 22 guarantees that people can seek a human review of an algorithmic decision, such as an online decision to award a loan, or a recruitment aptitude test that uses algorithms to automatically filter candidates.
In May, a government task force set up to look for deregulatory dividends from Brexit, led by the leading Brexiter Iain Duncan Smith, argued that Article 22 should be removed because it made it “burdensome, costly and impractical” for organisations to use AI to automate routine processes.
The idea is part of broad-based plans for a big overhaul of the UK data regime after Brexit which ministers say will boost innovation, and deliver what Oliver Dowden, the culture secretary, has called a “data dividend” for the UK economy.
https://www.ft.com/content/519832b6-e22d-40bf-9971-1af3d3745...
(Edit: formatting/link)
Sounds fair enough, especially if the spammer either has to pay the cost of the review (i.e. a few dollars), or is limited to only one review per month/year to prevent abuse.
Would I be happy for it to be driving around on the road? Probably.
Would I be happy for it to drive me, and it's 'my fault' if I don't notice it's gone wrong and kill someone? No.
So far Tesla (for example) seems nowhere near the point where they would accept responsibility for crashes -- they still always blame the driver for not paying attention.
It does not even work in difficult situations and on busy, chaotic streets, and that's where all the crashes happen.
It's like the statistic that sharks only attack near beaches - that's where 99.99% of people are!
I'm all in for a way to reduce any accident.
Nope. I won't be part of that statistic.
in my 20s you would need to bring the reckless back in.