
There are many situations where "come to a stop safely" is the worst possible thing you could do.

Yes, people are bad at driving, because they don't pay attention, panic, make mistakes, etc. But ML models tend to freak out at slight variations on mundane circumstances: a cyclist crossing the road at just the right angle on just the wrong colour of bike, that sort of thing. The thing self-driving cars need to avoid is killing people in broad daylight for no discernible reason, and that seems like the kind of thing you'd need a mind for. It's the same issue as with adversarial image manipulation to fool image recognition: if changing 3 pixels can turn a frog into a toaster, you aren't really "seeing" the frog in any symbolic way, and not seeing a road symbolically seems like a recipe for disaster.
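For anyone who hasn't seen it, the "change a few pixels" effect is easy to reproduce. Below is a minimal sketch of the fast gradient sign method (FGSM) in PyTorch, one of the simplest adversarial attacks; the tiny linear model here is just a hypothetical stand-in for a real image classifier, and with a random untrained model the prediction may or may not flip, but on a trained network a similarly small eps reliably does:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Hypothetical stand-in classifier; any differentiable image model
    # is attacked the same way.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()

    x = torch.rand(1, 3, 32, 32)   # "clean" input image
    y = torch.tensor([0])          # its true label

    # Take the gradient of the loss with respect to the *input*,
    # not the weights.
    x.requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()

    # FGSM step: nudge every pixel by eps in the direction that raises
    # the loss. eps = 0.03 is well below what a human would notice.
    eps = 0.03
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1).detach()

    with torch.no_grad():
        print("clean prediction:      ", model(x).argmax(dim=1).item())
        print("adversarial prediction:", model(x_adv).argmax(dim=1).item())

The point isn't the specific numbers: it's that the decision boundary sits close enough to every input that an imperceptible, targeted nudge crosses it, which is the opposite of symbolically "seeing" a frog.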



> The thing self-driving cars need to avoid is killing people in broad daylight for no discernible reason

This, I think, is the thing people miss when they say "self-driving cars don't need to be perfect, they just need to be better than human drivers, who aren't actually all that great".

From a public-confidence perspective, it doesn't matter if a self-driving car crashes one-tenth or one-hundredth as often as human drivers: as soon as you see a self-driving car kill someone in a situation a human driver obviously would have avoided (like the adversarial-image scenario above), you've destroyed any confidence in the car's driving ability, because "I would never, ever have crashed there."


I think the main thing they miss is that human drivers are actually amazingly good.



