
>One or two fixed position monochrome, low resolution forward facing cameras such as in a Tesla just won't compare.

I'm pretty sure humans could cope with that. https://www.youtube.com/watch?v=-CITIXlw_T4



I'm surprised that Tesla isn't doing stereo or trinocular (3 cameras) to get depth. It's just cameras, and it works reasonably well. You can use cameras the width of the windshield apart to get a wide baseline, which increases the useful range. But no. Although Tesla does have multiple cameras, they never mention stereo vision. Mobileye is depth from motion, and apparently, so is Tesla's in-house system.

(3-camera depth is more reliable than 2-camera. Many of the ambiguous situations for two cameras can be resolved with three. Especially if it's 3 cameras in a triangle, not a line.)
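For a rectified stereo pair, depth follows from triangulation as Z = f·B/d (focal length in pixels times baseline over disparity). A back-of-the-envelope sketch of why the wide baseline matters at range — all numbers here are assumed for illustration, not Tesla's or anyone's actual rig:

```python
# Stereo triangulation sketch: depth Z = f * B / d for a rectified pair.
# Hypothetical focal length and baselines, chosen only to show the trend.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point given its disparity between the two cameras."""
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px: float, baseline_m: float, depth_m: float) -> float:
    """Disparity (in pixels) that an object at depth_m produces."""
    return focal_px * baseline_m / depth_m

f = 1000.0                 # focal length in pixels (assumed)
narrow, wide = 0.12, 1.4   # baselines in meters: phone-scale vs. roughly windshield-width

for B in (narrow, wide):
    d = disparity_from_depth(f, B, 100.0)            # disparity of an object 100 m out
    # Depth error if stereo matching is off by half a pixel:
    z_err = depth_from_disparity(f, B, d - 0.5) - 100.0
    print(f"B={B} m: disparity at 100 m = {d:.2f} px, +0.5 px match error -> {z_err:.1f} m depth error")
```

With these numbers, a 0.5 px matching error at 100 m means roughly 71 m of depth error on the narrow baseline but under 4 m on the wide one — which is the sense in which a windshield-width baseline "increases the useful range."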


> Mobileye is depth from motion, and apparently, so is Tesla's in-house system.

Monocular depth perception without motion is a thing too, e.g. [1]; however, I doubt it is good enough for safety-critical systems like self-driving cars.

[1] https://github.com/mrharicot/monodepth



