
> which they get from cameras like one can get it from lidar

LiDAR directly measures the distance to objects. What Tesla is doing is inferring it from two cameras.
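For context, the stereo approach is plain triangulation: depth is focal length times baseline divided by pixel disparity. A minimal Python sketch (the focal length and baseline here are made-up illustrative numbers, not Tesla's):

    # Rectified stereo pair: Z = f * B / d
    #   f = focal length in pixels, B = baseline in metres,
    #   d = disparity in pixels. All values below are hypothetical.
    def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
        """Distance to a point from its disparity in a rectified stereo pair."""
        return focal_px * baseline_m / disparity_px

    f, B = 1400.0, 0.3  # hypothetical 1400 px focal length, 30 cm baseline
    for d in (40.0, 4.0, 2.0):
        print(f"disparity {d:5.1f} px -> depth {stereo_depth(f, B, d):6.1f} m")
    # Depth error grows quadratically with range: at d = 2 px, a 1 px
    # matching error moves the estimate from 210 m to 140 m (or 420 m).

That quadratic error growth at range is exactly where a direct time-of-flight measurement like LiDAR wins.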

There has been plenty of research to date [1] showing that LiDAR + Vision is significantly better than Vision Only at determining object bounding boxes, especially under edge-case conditions such as night and inclement weather.

[1] https://iopscience.iop.org/article/10.1088/1742-6596/2093/1/...



"What Tesla is doing is inferring it from two cameras."

People keep repeating this, and I seriously don't know why. Stereo vision gives pretty crappy depth; ask anyone who has played around with disparity mapping.
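You can see this for yourself with a few lines of OpenCV; a quick sketch, assuming you have any rectified stereo pair on disk (the file names are placeholders):

    # Block-matching disparity with OpenCV. The holes and speckle you
    # get in textureless regions are the "crappy depth" in question.
    import cv2

    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right).astype("float32") / 16.0  # fixed-point -> px

    vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
    cv2.imwrite("disparity.png", vis)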

Modern machine vision needs just one camera for depth, especially if that camera is moving. We humans have no trouble inferring depth with just one eye.
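Single-camera depth is what off-the-shelf monocular models do; a rough sketch of the usual torch.hub usage for MiDaS (model and transform names as published in the intel-isl/MiDaS hub repo, image path is a placeholder):

    # Monocular depth from one image with a pretrained MiDaS model.
    import cv2
    import torch

    model = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    model.eval()
    transform = torch.hub.load("intel-isl/MiDaS", "transforms").small_transform

    img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = model(transform(img))              # (1, H', W') relative inverse depth
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze().numpy()                       # resampled to input resolution

Note the output is relative inverse depth, not metric distance; recovering absolute scale takes extra cues such as camera motion or known object sizes.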



