I’ll try to clarify. In my opinion, the weakness of neural systems is their inability to deal with input for which they have comparatively little or no training. There’s no way around that, except by introducing structures or systems outside the neural nets themselves that provide logical frameworks for dealing with rare events. (And I don’t just mean a bunch of logic in code. Expert systems are one potential approach, for example.)
Your point was that modern autonomous driving systems can drastically reduce fatalities compared with human drivers, and I agree, but only in circumstances for which the car’s neural systems have been well trained. The systems ought to be able to handle ~99.99% of the types of circumstances gracefully before most of us will trust them to drive us around safely.