A popular take in autonomous driving is that the thing preventing Tesla from breaking beyond Level 2 autonomy is its aversion to LiDAR, which is a direct result of its preference for neural networks.
I’m confident that neural networks can process LiDAR data just as they can process camera data. I believe Musk drew a hard line on LiDAR for cost reasons: Tesla is absolutely miserly with the build.
Absence of LiDAR is just a symptom. Tesla only recently started working with a 3D model (which they get from cameras, much as one can get it from LiDAR). It's just that the people who use LiDAR usually work with a 3D model from the beginning.
> which they get from cameras, much as one can get it from LiDAR
LiDAR directly measures the distance to objects. What Tesla is doing is inferring it from two cameras.
There has been plenty of research to date [1] showing that LiDAR + vision is significantly better than vision only at determining object bounding boxes, especially under edge-case conditions such as night or inclement weather.
"What Tesla is doing is inferring it from two cameras."
People keep repeating this. I seriously don't know why. Stereo vision gives pretty crappy depth; ask anyone who has played around with disparity mapping.
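For anyone who hasn't tried it, here is a minimal sketch of classic disparity mapping with OpenCV's block matcher, converting disparity to depth via Z = f·B/d. The focal length, baseline, and image paths are made-up placeholders; real values come from stereo calibration. Because a fixed disparity error maps to a depth error that grows roughly with Z², distant objects come out noisy, which is the point above.

```python
import cv2
import numpy as np

# Hypothetical calibration values -- real numbers come from stereo calibration.
FOCAL_LENGTH_PX = 700.0   # focal length in pixels
BASELINE_M = 0.12         # distance between the two cameras in metres

# Load a rectified stereo pair (placeholder file names).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Classic block matching: compare small windows along epipolar lines.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth from disparity: Z = f * B / d. Small disparity errors at long range
# translate into large depth errors, which is why far-away depth looks noisy.
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = FOCAL_LENGTH_PX * BASELINE_M / disparity[valid]
```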
Modern machine vision needs just one camera for depth estimation, especially if that camera is moving. We humans have no trouble inferring depth with just one eye.
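As a rough illustration of learned monocular depth (not Tesla's actual stack, which is not public), a single-image depth network like MiDaS can be pulled from torch.hub. Note that it predicts relative inverse depth, so absolute scale still has to come from somewhere else (motion, known object sizes, etc.); the image path below is a placeholder.

```python
import cv2
import torch

# Learned monocular depth, illustrative only. MiDaS ships a torch.hub
# entrypoint together with matching preprocessing transforms.
midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
midas.eval()
transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

img = cv2.cvtColor(cv2.imread("frame.png"), cv2.COLOR_BGR2RGB)  # placeholder path
batch = transforms.small_transform(img)  # resize + normalise for the small model

with torch.no_grad():
    prediction = midas(batch)  # relative (inverse) depth per pixel
    # Upsample the prediction back to the original image resolution.
    depth = torch.nn.functional.interpolate(
        prediction.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()
```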
(E.g., Mercedes has already achieved Level 3.)