
The state of the art is discouraging for the money and years poured into it.

Recognizing solid obstacles is still unreliable. On one side, there's Tesla running into stationary objects multiple times, and Uber running down a pedestrian. On the other, there's a false-alarm rate that causes sudden stops.

Low-speed self-driving vehicles ought to work reliably by now, but don't. Google's cute little bubble car, top speed 25 MPH, was discontinued. Voyage has some cars in a retirement community. Some, as in 3. With safety drivers.[1] Local Motors has been issuing press releases for years, but not much is on the road. There are some self-driving shuttle buses, but they all have "safety drivers". EasyMile has real autonomous shuttle buses, but they had to drop the speed to about 10 MPH.

Worse, all these systems have a huge engineer to passenger ratio. Nothing is close to being financially realistic. That's not a permanent problem; in the early days of the Internet, it was said that the ratio of PhDs to packets was too high. But this is a long way from profitability.

It shouldn't be this bad.

[1] https://www.villages-news.com/2019/01/31/villager-treated-to...



The Uber did recognise Elaine Herzberg as a pedestrian in the last 2 seconds before hitting her, after failing to do so for the previous 4 seconds [1]. It could have applied its brakes, and she might have had a chance to survive (though probably not unscathed).

However, the car's auto-braking had been disabled because it was considered too conservative. So the only agent who could have reacted in time was the woman driving the car, who, as we know, was on her phone.
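Back-of-the-envelope kinematics suggest 2 seconds was plenty. The speed and deceleration below are illustrative assumptions on my part (roughly highway-arterial speed and hard braking on dry asphalt), not figures from the NTSB report:

```python
# Sketch: could braking at detection (2 s before impact) have mattered?
# All numbers are assumed for illustration, not taken from the report.
v0 = 17.0   # assumed initial speed, m/s (~38 mph)
a = 7.0     # assumed emergency deceleration, m/s^2 (dry asphalt)
t = 2.0     # detection-to-impact time cited above

gap = v0 * t              # distance to the pedestrian at detection: 34 m
d_stop = v0**2 / (2 * a)  # stopping distance from v0: ~20.6 m

print(gap, d_stop, d_stop < gap)  # car could have stopped short entirely
```

Under these assumptions the car stops with roughly 13 m to spare, ignoring actuator latency; even with half a second of lag, the impact speed would have been a small fraction of 17 m/s.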

Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.

_____________

https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg#Cause...


> Much as I find the hype around self-driving cars brain-dead, in this case, the car's AI was not at fault. Even if it could have made the decision to stop in time, the agency to act upon this decision was removed from it.

The agency to react was removed because its reactions are crap. If the AI is braking due to false positives all the time, and the only way to fix it is to disable its ability to react, then I would say that, indeed, the car's AI is at fault, albeit indirectly.
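The trade-off being described is the usual detection-threshold one: the knob that suppresses phantom braking is the same knob that misses real obstacles. A toy sketch with made-up confidence scores:

```python
# Toy illustration of the false-positive / false-negative trade-off.
# Confidence scores and thresholds are invented for the example.
detections = [
    (0.95, True),   # (classifier confidence, is a real obstacle)
    (0.40, False),  # phantom, e.g. a plastic bag
    (0.55, True),   # hard case: real pedestrian, low confidence
    (0.60, False),  # phantom
]

def brake_outcomes(threshold):
    """Count missed real obstacles and phantom stops at this threshold."""
    missed = sum(1 for s, real in detections if real and s < threshold)
    phantom = sum(1 for s, real in detections if not real and s >= threshold)
    return missed, phantom

print(brake_outcomes(0.5))  # permissive: (0 missed, 1 phantom stop)
print(brake_outcomes(0.7))  # strict: (1 missed obstacle, 0 phantom stops)
```

Disabling auto-braking outright is just the strictest possible threshold: zero phantom stops, every obstacle missed.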


Like I say, _in this case_ the car's AI was not at fault.

I don't know how good or bad Uber's car AI is. If by "crap" you mean that image recognition in general is brittle when exposed to real-world conditions, as opposed to the controlled experimental conditions in published results, then I agree.


It's been the same conversation for 10 years! Here's a comment I made 4 years ago https://news.ycombinator.com/item?id=10133049

It still takes a team of highly paid engineers to build and maintain an authentication page. Seriously - I guarantee you there is a 100+ person team at Google that handles login. Self driving cars are a very, very long way away. The problem space has unquantifiable and insurmountable complexity.


At the same time, we're able to launch huge rockets into space with people in them. We build massive buildings that withstand 9.0 earthquakes. We build and maintain massive systems of roads and the power grid, and deliver clean water to every home in the country. It's a problem of resources and infrastructure: we could rebuild every highway with a self-driving lane that would be much safer than what we have now.

You're right - the distance to being able to navigate just about any area whether built for it or not is a long way away, and the complexity may end up insurmountable. But we have the tech to move us incrementally towards that.



