Kind of interesting to consider how this adjusts priors on these outcomes:
- Self driving is not possible
- Self driving is not possible anytime soon
- Self driving is possible, but requires LIDAR
- Self driving is possible, and can be done with normal cameras
I've engaged with a lot of people who presume self-driving reduces to AGI, and therefore will not be achieved anytime soon, if ever. If this ends up being successful, I wonder which of their assumptions is wrong.
> To me it is sufficient to beat the top 20% of drivers.
Computerized safety systems can, should, and do raise the safety bar for the top 20% of drivers. "Self-Driving Cars" aren't competing in a static environment; similar tools are raising the safety bar they have to compete against, which means it will get increasingly difficult to beat non-self-driving vehicles.
> To me it is sufficient to beat the top 20% of drivers.
It's interesting that you set the threshold there (I presume you meant 'beat the bottom 80% of drivers'). Rationally, we should be happy if their driving performance is above average. I fear that the public (and the courts and the insurers, etc) will require airline levels of per-mile safety and still be wary.
I would be fine with beating average but as soon as this thing kills someone, I want there to be a significant difference in accident rate so it does not end up in regulatory hell.
I think from a statistical standpoint, beating average is great. But the big difference is that we have this odd and fundamental requirement for justice/punishment in our society that gets lost with self-driving cars.
If a human driver kills someone / injures someone / damages property, people get satisfaction or resolution when the human is punished - insurance increases, license points, jail, etc.
But when a self-driving car - even if it's better-than-average - does something wrong, there's nobody to punish and no retribution to exact, so people will be left feeling unsatisfied. That's why the bar is going to need to be so much higher.
There's also the bias (accurate or not) of "most people are bad drivers, but I'm better". Beating the average only beats the average. It improves things across the board (and that's awesome), but people who are particularly careful or skilled will balk until it's perceived as better than they are.
Pick something you pay extra attention to. Now imagine being informed you can't do that any more, you have to take what [this robot] does for you. Also remember all the terrible UI changes in software that have been done for a majority instead of your use-case.
Reluctance until it's substantially better seems reasonable to me. Somewhat unfortunate on a species level, but not for many individuals.
You forgot a fifth one: self driving is possible, but follows the 80/20 rule, where the last 20% of the work requires 80% of the effort. We probably haven't even started on that 20%.
Waymo already "self-drives" today, it just is extremely slow at making turns when the road is busy, and if there is any kind of unusual obstacle on the road (such as construction cones that require you to drive partially in another lane) it just stops completely and has the rider wait for twenty minutes for a manual operator to come and take over.
Those two obstacles won't be overcome until AGI. At best their frequency will be brought down a bit but not by nearly enough orders of magnitude.
The big concern is that driving as a human skill is reactive and adaptive, whereas ML (and software in general) models are pre-baked. If something happens outside the car's model, it will react unpredictably, and strange circumstances can arise while driving. AGI, as based on human cognition, would have the capability to adapt to as-yet-unseen circumstances.
I would suggest visiting r/idiotsincars sometime. Humans are terrible at reacting to situations outside of their experience, and essentially do random things all the time.
If an AI system's default response is "come to a stop safely" then it's going to be way ahead of a lot of human "unexpected situation" handling in cars.
There are many situations where "come to a stop safely" is the worst possible thing you could do.
Yes, people are bad at driving: they don't pay attention, they panic, they make mistakes. But ML models tend to freak out at slight variations on mundane circumstances; a cyclist crossing the road at just the right angle on the wrong colour of bike, that sort of thing. The thing self driving cars need to avoid is killing people in broad daylight for no discernible reason, and that seems like the kind of thing you'd need a mind for. It's the same issue as with adversarial image manipulation fooling image recognition: if changing 3 pixels can turn a frog into a toaster, you aren't really "seeing" the frog in a symbolic way at all, and not seeing a road symbolically seems like a recipe for disaster.
> The thing self driving cars need to avoid is killing people in broad daylight for no discernible reason
This, I think, is the thing that people miss when they say "self-driving cars don't need to be perfect, they just need to be better than human-drivers, who aren't actually all that great".
From a public confidence perspective, it doesn't matter if a self-driving car crashes one-tenth or one-hundredth as often as human drivers; as soon as you see a self-driving car kill someone in a situation a human driver obviously would have avoided (like the adversarial-image kind of scenario), you've totally destroyed any and all confidence in its driving ability, because "I would never, ever have crashed there."
Yeah it's been really odd to see the take that self-driving must require strong AI. It needs to be done carefully, but it's clearly a manageable engineering problem if you have good sensors.
If there's a person at an intersection directing traffic, it will be very hard to have the car itself communicate with them as easily as a human can. Edge cases like that is where AGI would be needed it seems.
So in this hypothetical, is the person directing traffic completely oblivious to the existence of self driving cars? Pretty sure we can assume traffic cops in the future will be trained to deal with self driving cars and use only gestures from a predefined list.
I would believe that people directing traffic usually use only a handful of signals, but it's certainly not a universal truth. This is one more case of the 80/20 problem that self driving tech keeps running into.
Sure, it's probably feasible for cars to handle hand signals in the happy path, but anything outside of that will be disastrous. How will the car understand and communicate with a person who doesn't use the standard signals, short of having some level of general intelligence?
> People directing traffic use only a handful of signals.
Sometimes. Other times they confusingly gesticulate or just shout out things, or even give conflicting signals. Humans can interpret these without much effort but it's a hard AI problem.
Self-driving is more of a language problem than a technology problem. In 2005, grad students had cars driving themselves on a course. Tesla and Waymo both have cars that drive very well on many roads. Self-driving is here; now all that is left is for people to argue about what "L5" means as the systems improve. And they will improve, as there is a clear path to improvement.
I don't expect that we will ever see the day where everyone is OK with self-driving cars on the roads regardless of the safety statistics, because there will always be edge cases of crashes and personal preferences around driving styles.
I think from the original DARPA challenges most researchers knew that self driving was possible, or more correctly "a promising direction", but that its application to the real world and its robustness were far larger barriers back then.
We have the advantage of hindsight when looking at these claims, which may now seem more achievable than before due to our better understanding of the difficulties and of the technologies to address them.
I think many people confuse their excitement about the promise of self driving cars with the actual technical and political barriers that still need to be addressed to bring them to reality (ranging from robustness, endless edge cases, and perception to insurance and public policy).
> I think many people confuse their excitement about the promise of self driving cars with the actual technical and political barriers that still need to be addressed to bring them to reality
I think most of the confusion is due to deceptive marketing. Sales people like Elon Musk have been saying that the big dream of true self driving is just around the corner for years now, even though they were nowhere close.
Self driving for taxis is a very different problem than for personal vehicles. For the latter you can always rely on the human driver to handle the last 1% or 0.1% edge cases and still provide a ton of value. Taxis don't have that option, so it really is perfect level 5 automation or bust. "Good enough" doesn't cut it.
Taxis do have that option. Cruise remote operators "guide" taxis once every 5-10 miles during peak hours, and they expect to continue doing so after public launch.
https://youtu.be/sliYTyRpRB8?t=212
Define “soon”. For me it means 50 years. That’s an infinitesimally small amount of time. If we can have level 5 autonomy in 50 years I will say that progress was rapid and we really excelled.
Here in 2021 it has been well over 50 years since the first transistor was built.
> The first working device to be built was a point-contact transistor invented in 1947 by American physicists John Bardeen and Walter Brattain while working under William Shockley at Bell Labs. The three shared the 1956 Nobel Prize in Physics for their achievement.
Yup. Hundreds of years from now, this whole era from WW2 onwards (the 3rd/4th Industrial Revolution?) will be seen as spanning 1950 to probably the mid 21st century.
Self driving was never impossible, just infeasible with tech at the time. Surely it'll be standard in a few decades, despite that being quite far into the future.
Given sufficient qualitative and quantitative investment, self driving may be technologically feasible at some point in the future, but not price competitive with human driving.
Self driving proponents seem to assume the tech is free. Just keeping sub-meter-scale 3D street maps up to date has a massive cost.
There are enough people in the developed world who are physically unable to drive or use public transportation in their area and who also don't want to be homebound that the economics could still work out even if they were the only market, which they aren't.
> Just keeping sub meter scale 3d street maps up to date has a massive cost.
You seem to assume there is unmet demand that you can meet more cheaply with automation than with human drivers. I agree there may be unmet demand, but I don't see how you get automation cheaper than a $10-an-hour cab driver. I mean, if automation were cheap and easy, our factories would all have been automated before something quixotic like a car. It is frankly absurd.
Well, for one thing, I don't want anyone in populated parts of the US to live on $10 an hour, not even a cab driver. What happens when we double that? Cost of living goes up over time, but cost of technology goes down. When do we reach the tipping point? Have we already?
If wages go up to $20 an hour, then maybe there is general inflation and robotics goes up to $200 an hour. In the world today we have cheap labor and expensive energy. That is why automation is not replacing humans. Look into the history of the British industrial revolution.
I have probably been in far more American factories than you. Obviously I am not going to answer your question directly, because I have an interest in protecting my career and privacy, but in my extensive experience in manufacturing, many things that you might think could be automated are not, because it is still cheaper or more effective to pay a worker $15-25 an hour than to spend $15,000 on automation that doesn't work right half the time.
Here is the reality that self driving will confront: cost-effective, reliable, competitive automation is hard, even for seemingly closed, mundane tasks like palletizing products.
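The break-even logic above can be put into rough numbers. A minimal sketch (every figure here is hypothetical, chosen only to illustrate why a $15,000 cell that misbehaves half the time loses to a $20/hr worker):

```python
def breakeven_hours(automation_cost, wage, uptime):
    """Hours of replaced labor before automation pays for itself.

    `uptime` discounts the machine's effective output: a cell that
    doesn't work right half the time only replaces half the labor.
    """
    effective_saving_per_hour = wage * uptime
    return automation_cost / effective_saving_per_hour

# Hypothetical: $15,000 of automation vs. a $20/hr worker at 50% uptime.
hours = breakeven_hours(automation_cost=15_000, wage=20, uptime=0.5)
print(hours)  # 1500.0 hours, i.e. roughly 37.5 forty-hour weeks
```

And that ignores maintenance and the technician who babysits the cell, both of which push break-even out further.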
Yes, it's expensive; however, as we refine the algorithms and the processing systems, it gets cheaper, especially amortized over more cars, where the per-car cost becomes affordable.
ML doesn't work well enough for offline maps generation either, and all the high quality maps require human editors for final touch.
All this work is currently done because realtime perception doesn't work well enough, and you can have a much more reliable system with the aid of the maps. Having a 3D base map of the world makes the realtime perception problem far simpler, and it makes fancy sensors less critical.
In the US, where labor is very expensive, self driving will make sense even if it's expensive. Someplace like China or India, where a middle class person can afford a driver, it probably makes less sense, though the push for it in China is the strongest I've seen anywhere in the world.
So you think self driving cars will be able to profitably offer a 10 mile ride for $20 in suburbia of second tier cities like Springfield MA? If your opinion is that cab drivers make much more than $10 an hour today then I suggest you look at things outside the bay area.
Self driving does not scale at all right now, because, as you said, these special maps need a lot of human labor to make, and the cars need a lot of sensors on top of the base auto platform. And labor is not even the main cost driver in passenger transportation.
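The cost structure being argued about here can be sketched out. Every number below is hypothetical; the point is only the shape of the comparison: removing the driver wins only if the added sensor and mapping/ops costs amortize below the driver's wage per mile.

```python
def human_cab_cost_per_mile(wage_per_hr, avg_speed_mph, vehicle_cost_per_mile):
    # Driver wage spread over miles driven, plus base vehicle cost.
    return wage_per_hr / avg_speed_mph + vehicle_cost_per_mile

def robotaxi_cost_per_mile(sensor_stack_cost, lifetime_miles,
                           mapping_ops_per_mile, vehicle_cost_per_mile):
    # Sensor stack amortized over vehicle lifetime, plus ongoing
    # mapping/remote-operator cost, plus the same base vehicle cost.
    return (sensor_stack_cost / lifetime_miles
            + mapping_ops_per_mile + vehicle_cost_per_mile)

# Hypothetical: $10/hr driver averaging 20 mph, base vehicle at $0.50/mile.
human = human_cab_cost_per_mile(10, 20, 0.50)  # $1.00/mile

# Hypothetical: $100k sensor stack over 300k miles, $0.30/mile maps + ops.
robo = robotaxi_cost_per_mile(100_000, 300_000, 0.30, 0.50)  # ~$1.13/mile

print(f"human: ${human:.2f}/mile, robotaxi: ${robo:.2f}/mile")
```

With those made-up inputs the robotaxi loses; the argument turns entirely on how far the sensor and ops lines can be driven down.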
I am in the AGI camp, but have wondered and still wonder: why are efforts not focused primarily on interstate/highway travel, specifically on collaboration with the DoT for mesh/distributed/coordinated long-term travel?
I don't need a self-driving taxi. I would pay $20K for a car which participated in a federally-regulated framework that let me let go of the wheel when I get on a highway and let the emergent cloud determine how best to move my car and all the others, on dedicated/reserved lanes, in coordinated "trains", for hours.
The wins here seem like no-brainers. Sidestep all jurisdictional nonsense; optimize commerce and personal travel; automatically handle emergency vehicles and other unusual conditions; etc ad infinitum.
All my car needs to be able to do other than existing lower-tier self-driving/driver assist basics, is join the borg.
Coordination for traffic flow management seems like an unbelievable win.
But no, all we seem to be getting is cyclist-terminating taxis which cost $250K each and are, IMO, doomed in target-rich environments like SF to never foreseeably handle the last 8% of anomalous novel cases adequately.
If you just want a car that drives itself on freeways, get a Tesla. Probably 80% of my car's 15,000 miles have been on autopilot. It automatically changes lanes to pass and automatically gets out of the passing lane afterwards. It takes offramps and interchanges. It automatically brakes for obstacles. It aborts lane changes if someone else gets in the way. It even works in rain and light snow.
Other car companies are a few years behind, but even something as simple as adaptive cruise control + lane centering is a huge help on freeways.
It has killed people who let it self-drive on the freeways. That was a while ago and maybe it has gotten better since, but I don't think taking your eyes off the road is advisable.
The latest update has eye tracking and warns you if you take your eyes off the road for more than a second or two.
Nobody is saying that self-driving cars are perfectly safe. Considering how many Teslas are on the road and how many miles are driven on autopilot, it would be surprising if there weren't any deaths. As long as it's safer than unaided human drivers (which it is), it's a net win.
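A sketch of how that "safer than unaided humans" comparison is normally made, per 100 million miles, the standard unit for US road-safety statistics. The autopilot figures below are placeholders, not real data; only the human baseline is the oft-cited ballpark for US drivers overall.

```python
def rate_per_100m_miles(deaths, miles):
    # Fatalities normalized to the standard 100-million-mile unit.
    return deaths * 1e8 / miles

# Placeholder: 3 deaths over 2 billion autopilot-assisted miles.
autopilot = rate_per_100m_miles(deaths=3, miles=2e9)  # 0.15

# Oft-cited ballpark for US drivers overall, all road types.
human_baseline = 1.1

print(f"autopilot: {autopilot:.2f} vs. human: {human_baseline:.2f} "
      "deaths per 100M miles")
```

A fair comparison also has to control for road type: autopilot miles skew heavily toward freeways, which are safer per mile to begin with, so the raw rates overstate the system's advantage.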
I agree. I think the problem has been the ride hailing companies shifting all the attention to being self driving taxis.
For 95% of people, I think the value is in letting them do other stuff during long drives instead of being stuck behind the wheel. This seems many orders of magnitude easier than trying to tackle inner-city driving.