
AI isn’t magic. If there isn’t enough information in the inputs, you can’t expect reliable results. It’s the same principle in all of software: garbage in, garbage out.

If there simply isn’t enough visual information, vision-only will fail.

https://youtu.be/IQJL3htsDyQ


That is not a debunking... That's someone running a similar experiment and getting a different result. That would debunk the claim that Teslas can never detect a painted wall. It does not debunk the claim that Teslas will sometimes fail to detect a painted wall.

And in a safety-critical system, the distinction is not mere pedantry.
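
To put rough numbers on why it isn't pedantry (a minimal sketch in Python, with hypothetical trial counts; none of these figures come from either video): with zero observed failures in n trials, an exact binomial (Clopper-Pearson) upper bound still leaves a lot of room for a nonzero failure rate:

    # Sketch: 95% upper confidence bound on the failure rate after
    # observing zero failures in n_trials runs (Clopper-Pearson exact
    # bound; the "rule of three" approximates it as 3/n for large n).
    def failure_rate_upper_bound(n_trials, confidence=0.95):
        alpha = 1.0 - confidence
        # Solve (1 - p)^n = alpha for p.
        return 1.0 - alpha ** (1.0 / n_trials)

    for n in (1, 10, 100, 1000):
        print(f"{n:>5} clean runs -> failure rate may still be "
              f"up to {failure_rate_upper_bound(n):.1%}")

One clean run leaves the 95% bound at 95%, i.e. it tells you almost nothing; even 100 clean runs only gets you down to about 3%, which is still far too high for a safety-critical system.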


I mean, he didn't even use FSD.


Theoretically, if there's not enough visual information for an AI driver, then there's not enough visual information for a human driver either, and that's a problem with the road. (To be sure, roads like this do occasionally exist: e.g. merging onto a higher-speed thoroughfare from a lower level, with a very short distance between "where you're in a position to see the merging traffic (and not that much of it)" and "where the roads have fully merged (and there's no shoulder)".)



