Many comments in this thread are variations on a theme of "self-driving cars don't need to be perfect, they just need to be better than human drivers, who aren't actually all that great." I think it would be nice if this were true, and I suppose it is from an actuarial perspective, but it's also an extremely flawed point.
From a public confidence perspective, it doesn't matter if a self-driving car crashes one tenth, even one one-hundredth as often as human drivers.
If you see a self-driving car cause an accident, particularly a lethal one, in a situation that almost any human driver would have avoided, you've totally destroyed any and all confidence in this car's driving ability, because "I would never, ever have crashed there."
As we've seen, there are lots of scenarios like this. The Tesla crash from last year, where the car simply didn't see a white truck against a light background. Or imagine an adversarial image attack, where some tiny insignificant detail is placed onto a stop sign or a "do-not-enter" that turns it into nothing from the perspective of the AI driver.
These kinds of scenarios obliterate public confidence in self-driving cars, because intuitively, you immediately realize that you're "a much better driver" than this car! Even if that's untrue 99/100 times, it only takes one visceral example to drive this kind of wedge.
Self-driving cars don't just have to be better than human drivers. They have to be as close to perfect as is possible, because that's what people will expect.
>human drivers, who aren't actually all that great.
A large fraction of human drivers are actually all that great. The majority of accidents/deaths are caused by a minority of terrible drivers, or good drivers who found themselves in terrible but rare circumstances. The majority of drivers drive hundreds of thousands of miles without any accidents that were their fault, or even any accidents at all.
In other words, it's probably easy to beat the mean human driver, which is greatly dragged down by a minority of terrible drivers. It's probably very difficult to beat the median human driver, and near impossible to beat the top 20% of human drivers.
I don't think it's easy to beat the mean human driver and to demonstrate with solid data that you've done so.
In 2019 in California, there were 1.06 deaths per 100 million vehicle miles traveled. Any self-driving automobile technology that doesn't have at least 1 billion vehicle miles of data is in no position to claim that it is safer than human drivers and less likely to kill people.
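To make the "1 billion miles" point concrete, here is a minimal back-of-the-envelope sketch (my own, not from the thread), assuming fatalities arrive roughly as a Poisson process at the human baseline rate:

```python
# Why ~1 billion miles is the minimum before a fatality-rate comparison means anything.
# Assumption: deaths occur at the 2019 California human rate of 1.06 per 100M miles.
human_rate = 1.06 / 100_000_000  # deaths per vehicle mile

for miles in (100_000_000, 1_000_000_000, 10_000_000_000):
    expected = human_rate * miles
    print(f"{miles:>14,} miles -> {expected:.1f} expected deaths at the human rate")
```

At 100 million miles you'd expect only ~1 death at the human rate, so observing zero deaths is entirely consistent with being no safer than humans. Only around 1 billion miles (~10.6 expected deaths) does observing substantially fewer start to carry statistical weight.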
Self driving cars don't make the same kinds of mistakes as human drivers do, but they make different kinds of mistakes. Some of these can be fatal.
>I don't think it's easy to beat the mean human driver and to demonstrate with solid data that you've done so.
Agreed. I should have written "relatively easy."
> Any self-driving automobile technology that doesn't have at least 1 billion vehicle miles of data is in no position to claim that it is safer than human drivers and less likely to kill people.
The circumstances under which those miles are driven (e.g. road type, location, weather, time of day, etc.) also have to be consistent with circumstances under which humans are driving. 10 billion autonomous vehicle miles driven only on highways in broad daylight is a worthless point of comparison, whereas 500 million miles driven across a variety of conditions representative of the full human driving population is worth a lot more.
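The reweighting idea above can be sketched in a few lines. All the condition categories and rates here are invented for illustration; the point is only that a fleet's raw rate and its human-mix-weighted rate can differ a lot:

```python
# Hypothetical sketch: reweight an AV fleet's per-condition fatality rates
# to match the mix of conditions humans actually drive in.
# Every number below is made up for illustration.
human_mix = {"highway-day": 0.30, "urban-day": 0.40, "night": 0.20, "bad-weather": 0.10}
av_rate   = {"highway-day": 0.2,  "urban-day": 0.8,  "night": 1.5,  "bad-weather": 2.0}
# av_rate is deaths per 100M miles in each condition.

# A fleet driving 100% highway-day miles would report a raw rate of 0.2 and look great.
# Reweighted to the human driving mix, the comparable rate is far higher:
weighted = sum(human_mix[c] * av_rate[c] for c in human_mix)
print(f"human-mix-weighted rate: {weighted:.2f} per 100M miles")  # ~0.88
```

The same raw mileage can support opposite conclusions depending on where it was driven, which is why the comparison has to be stratified by condition.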
This is key: as a human driver, there's an expectation, and some wiggle room, that other humans will fuck up in predictable ways, and experienced drivers (usually) know how to avoid getting into incidents when that happens.
Self-driving cars are weird to drive around. They will absolutely stop in situations where no human would think to stop. I think about this as a motorcycle rider: what if I'm committed to a corner I can't see around, and the self-driving car's software decides it should just stop in the middle of the road past the apex? A human driver could stop there too, but many would know it's a dangerous place to stop and try to pull onto the shoulder or minimize the time spent stuck there.
I don't know whether this is something where we need to tolerate a temporarily increased incident rate while people get used to them being on the road, or whether we need to make the software drive more like humans (which presumably means making its behavior sloppier than it can actually handle, so that its faster reaction times don't cause humans with slower reactions to slam into it).
The mean is given by the number I posted, about 1 death per 100 million miles traveled. That number includes drunk drivers, distracted drivers trying to text, everything.
The point is that "1 death per 100 million miles traveled" is the mean average, but most drivers do better than the mean. Mean, median, and mode are not the same and the mean crash rate is not relevant to most drivers.
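A toy example (my own, with invented numbers) shows how a small minority of high-risk drivers drags the mean far above the median:

```python
# Hypothetical population: 90 low-risk drivers and 10 high-risk drivers.
# Rates are crashes per million miles, invented for illustration.
crash_rates = [0.1] * 90 + [5.0] * 10

mean = sum(crash_rates) / len(crash_rates)
median = sorted(crash_rates)[len(crash_rates) // 2]
print(f"mean: {mean:.2f}, median: {median}")  # mean ~0.59, median 0.1
```

A system that beats the 0.59 mean by a factor of two would still crash roughly three times as often as the median driver in this toy population. That's the mean/median gap the comment is pointing at.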
* 1,066 Alcohol-impaired driving fatalities (fatalities in crashes involving a driver or motorcycle rider with a blood alcohol concentration, or BAC, of 0.08 or higher) in 2019.
* 620 Unrestrained passenger vehicle occupant fatalities in all seating positions in 2019.
* 164 Teen motor vehicle fatalities (age 16-19) in 2019.
* 972 Pedestrian fatalities in 2019.
* 133 Bicycle fatalities in 2019.
Assuming the above (alcohol-impaired, unrestrained passenger, teen, pedestrian, and bicycle fatalities) are all poor-driver related, that leaves 651 traffic fatalities.
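Checking the arithmetic: summing the listed categories and the stated 651 residual implies a total of 3,606 fatalities (the total isn't stated above, so this is just the number the subtraction implies; note the categories can also overlap, e.g. an alcohol-impaired crash that kills a pedestrian):

```python
# Sum the 2019 fatality categories listed above and recover the implied total.
categories = {
    "alcohol-impaired": 1066,
    "unrestrained occupant": 620,
    "teen (16-19)": 164,
    "pedestrian": 972,
    "bicycle": 133,
}
subtotal = sum(categories.values())
print(f"categories sum: {subtotal}")          # 2955
print(f"implied total: {subtotal + 651}")     # 3606
```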
Not really answering your question, but CDC says 28% of all traffic-related deaths in 2016 involved alcohol. Excluding these would immediately improve the mean performance.
I see why LMGTFY had its day in the sun: you can literally paste the first sentence of parent’s post into DDG, and the first link answers your question. Hell, the preview answers your question, you don’t even need to click it.
Not sure about that. I'd say of the people who are very close to me (friends & family), I wouldn't want to be a passenger with half of them. AI is /so much better/, can't wait for it to be mainstream. And it's not just about the AI driving, it's about the AI reacting 100X faster and having eyes all around the car to avoid accidents before they could even happen.
I wonder what humans will actually do better than AI in 50 years. I have a personal theory, but that's a bit off topic here.
Exactly right. Furthermore, most risky behavior is a choice. Crashes aren't random "acts of god" that strike everybody with equal likelihood. If you choose not to drive drunk, drive in bad weather, or drive for many hours without rest, then you can greatly improve your odds and almost certainly reduce your risk below the average. In these discussions I often see far too much fatalism: "Everybody thinks they're above average, but half of you aren't" ignores both the fact that crashes aren't distributed like that and the fact that the riskiest behavior is a choice. The mean number of miles driven drunk is greater than zero, but the number of miles I drive drunk is zero.
Here's a particularly spicy viewpoint: It doesn't matter if you obliterate public confidence in self-driving cars, or if there are lethal accidents that would've been avoided.
As t approaches infinity, what are the chances that self-driving cars won't take over the world?
If you think governments can't simply legislate away people's freedom to drive in the service of corporate profits or what's "best for them", you haven't read the news lately.
Freedom clearly does not work in a self-centered society. We wouldn't need such drastic actions if enough people voluntarily did the right things. But not enough do, so here we are.
I'm really beginning to get sick of seeing people use the word "freedom" when they clearly mean "no personal responsibility while living in a society among other people".
You must be reading different news. Governments (especially the US) can't even get people to stay at home for a while to avoid a deadly disease.
It's borderline impossible that they will "legislate away people's freedom to drive" anytime soon.
Even if a startup appeared tomorrow that could unequivocally show that they have a perfect self-driving car that runs on fairy dust and cleans up cities as it goes, people would still demand their right to drive ICEs.
At least here in Germany, even "let's maybe have a speed limit on all the Autobahns" is highly contested. (And one half-joking suggestion has been to introduce a speed limit only for ICE cars as a motivation to go electric)
Counterpoint - nobody actually cares about traffic fatalities. Nearly 40,000 deaths a year in the US, and the majority of people get in their cars every day without ever thinking about this risk and go about their lives (or to put that another way, the risks are already so low as to be negligible to most people, and anything else within the ballpark of negligible is still negligible). Normalcy bias is incredibly strong, and as soon as self-driving cars are "normal," people will get on board without thinking twice.

Tesla is slowly acclimating people to self-driving; basically everyone is familiar with the idea at this point, and as soon as it's available and someone tells you it's "just as safe as driving yourself," most people will just go with it. Especially given how big the upside is: you don't have to deal with the stress of driving anymore, you can just relax in your car. Or in terms of getting a ride, maybe it's 1/4 the price of a taxi driven by a human. Sounds good, people will roll with it.
Of course the more it starts taking off, there will always be a vocal subset of the population that is strongly opposed to it, just like there are vocal anti-vaxxer groups and there were anti-seatbelt protests back in the 80's. But I can't imagine the naysayers having a very big impact on the progression of the technology, the upsides are just too enormous.
I'm a bit surprised by how negative a view HN has towards human adoption of technology.
What technology is perfect? What code is perfect and doesn't have bugs in it? And yet we adopt automated systems anyway. Yes, sometimes it is painful and an entire airline grounds all planes for half a day... But that doesn't stop the unending march towards efficiency and technological progress.
I think the point is that the general public is fickle and their trust in self driving cars is tentative. If the public loses confidence in the technology it will make it very difficult for them to roll it out. So this isn't really about what HN thinks, it's about what HN predicts the general public will think.
> Self-driving cars don't just have to be better than human drivers. They have to be as close to perfect as is possible, because that's what people will expect.
Though I disagree, because people expect a lot of things, and alternative outcomes happen when the incentives are there (current cars, airplanes, and elevators come to mind). Waymo seems to be aware of this, judging from a recent video. [1]
This is arguing that because humans are stupid and biased, they will believe they are better drivers despite all the evidence to the contrary, and therefore, a solution needs to be close to perfect so that humans stop being fearful.
We have seen this before with autopilots in aircraft and ships, elevator operators, and so on.
All it needs is time and adoption, not perfection. As adoption increases, the roads get safer and safer, bringing us closer to the ideal; and as time passes, we get closer to adoption.
The world doesn't work in idealized ways. Yes, perhaps ideally self-driving cars would need to be nearly perfect in order to win wide public acceptance. At the same time there was an article on the front page today investigating literal gulag labor camps in the USA, and I doubt those are popular or going away any time soon. Once this is working technology that can turn a profit, it will depend on who stands to gain, how deep their pockets are, who gets bribed, which talking points get pushed, and which votes get bought. Whether the public ends up particularly happy about the outcome is at best, secondary, and at worst, irrelevant. Don't think I've run into too many people happy about our healthcare system, yet that remains stubbornly broken. No one I've met really wanted to lose our lower-middle class to China, yet there it went.
People don't usually pay too much attention to what the government relations department has been saying to the NHTSA. It doesn't have to be the same thing Elon is tweeting.
Watching the Waymo video just changed my viewpoint.
They have a "Pull Over" lever in the back seat.
Would I trust a self-driving vehicle without lidar? Probably not.
Would I use a self-driving vehicle for commuting and around the city? Yes. Two driving chores I hate. I hate them to the point that the philosophical argument over dying by my own hands or a computer's is put in the trunk.
On a personal note, the thought of a computer driving me off a cliff on the way to Stinson Beach is not something I would chance, even if they are statistically better drivers than most humans.
I can't imagine tumbling down a cliff thinking, "if only I had driven today."
I still foresee most trucking jobs, and most driving jobs, completely gone in a few years.
I agree that these incidents are concerning, but you mention Tesla's crashes, and yet people are still buying them. I think you are underestimating people's laziness.