
Lots of things! Since 3D convolutional networks are very limited in their maximum resolution, most of the interesting things you can do involve learning on RGB+D images via a 2D CNN. A lot of tasks on images (segmentation, identification, etc.) are easier when you have even partial depth data as input.
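To make the "RGB+D via a 2D CNN" idea concrete, here's a minimal sketch of the early-fusion approach: stack the depth map as a fourth input channel and run ordinary 2D convolution kernels that span all four channels. The shapes and filter counts are hypothetical, and plain NumPy stands in for a deep-learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 8, 8
rgb   = rng.random((3, H, W))          # 3-channel color image
depth = rng.random((1, H, W))          # 1-channel depth map, same resolution
rgbd  = np.concatenate([rgb, depth])   # early fusion: shape (4, H, W)

def conv2d(x, kernels):
    """Valid-mode 2D convolution; each kernel spans all input channels."""
    c_in, h, w = x.shape
    c_out, _, kh, kw = kernels.shape
    out = np.zeros((c_out, h - kh + 1, w - kw + 1))
    for o in range(c_out):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[o, i, j] = np.sum(x[:, i:i+kh, j:j+kw] * kernels[o])
    return out

# 16 hypothetical filters, each 4x3x3 -- one slice per channel (R, G, B, D)
kernels = rng.random((16, 4, 3, 3))
features = conv2d(rgbd, kernels)
print(features.shape)                  # (16, 6, 6)
```

The point is that nothing in the network needs to change to consume depth: the first layer's kernels just gain a fourth channel slice, so any off-the-shelf 2D CNN architecture can be reused.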


If you didn't see the link to the paper, it's here: https://arxiv.org/pdf/1709.06158.pdf A wide range of use cases are discussed there.


just got one (or rather, we got a "come in for an interview" email)


I've done a lot of work in this area, and I can say that this is significantly faster and higher quality than other Kinect-based 3D reconstruction techniques such as RGBDemo ( http://www.youtube.com/watch?v=Cldf7UdFq1k ).

It's also clear just how much of an advantage having a 3D sensor is for reconstruction when you compare this against 2D-camera-based 3D reconstruction software like Photofly.

