
I take their point as a problem of control. Picking up objects with hand tracking is, in my experience, much less deterministic, and much less useful, compared to picking them up with a button and haptics.


Everyone who has tried the Vision Pro has said the eye and hand tracking is flawless.

Definitely agree on haptics, which many have mentioned as an issue.


> Everyone who has tried the Vision Pro has said the eye and hand tracking is flawless.

Context of the interaction is important. I've seen this mentioned for selecting, but not for something like picking up a 3D object. I don't recall seeing that use case in any of the release footage.


There are plenty of WWDC videos around with specific details on hand tracking, e.g.

https://youtu.be/zNFpAQb9hAg?t=908
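
For what it's worth, visionOS does expose that skeleton through ARKit's HandTrackingProvider. Here's a minimal sketch of reading two fingertip joints and turning them into a pinch signal; the 2 cm threshold is my own guess, not anything Apple documents:

    import ARKit
    import simd

    let session = ARKitSession()
    let hands = HandTrackingProvider()

    // Joint transforms are relative to the hand anchor, so compose
    // with the anchor's transform to get a world-space position.
    func worldPosition(of joint: HandSkeleton.Joint, in anchor: HandAnchor) -> SIMD3<Float> {
        let m = anchor.originFromAnchorTransform * joint.anchorFromJointTransform
        return SIMD3(m.columns.3.x, m.columns.3.y, m.columns.3.z)
    }

    Task {
        try await session.run([hands])
        for await update in hands.anchorUpdates {
            let anchor = update.anchor
            guard anchor.isTracked, let skeleton = anchor.handSkeleton else { continue }

            let thumb = skeleton.joint(.thumbTip)
            let index = skeleton.joint(.indexFingerTip)
            // Individual joints lose tracking when occluded, which is
            // exactly the failure mode being discussed in this thread.
            guard thumb.isTracked, index.isTracked else { continue }

            let gap = simd_distance(worldPosition(of: thumb, in: anchor),
                                    worldPosition(of: index, in: anchor))
            if gap < 0.02 {  // ~2 cm apart: treat as a pinch (arbitrary threshold)
                print("\(anchor.chirality) hand pinched")
            }
        }
    }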


That's an impressive amount of claimed joint tracking at https://youtu.be/zNFpAQb9hAg?t=985

26 joints per hand, assuming all the joints in your hand are visible to the device (which seems unlikely much of the time).

Per the parent's line of questioning, it doesn't address how that tracking maps to manipulation in the virtual realm in practice.
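
To illustrate the gap: the mapping from skeleton to "grabbing" a virtual object is left to the app layer. A naive RealityKit sketch of the kind of heuristic involved, where the 5 cm grab radius and snap-to-midpoint behavior are purely my assumptions, not anything Apple ships:

    import RealityKit
    import simd

    // Naive grab heuristic: if a pinch begins within 5 cm of the object,
    // snap the object to the pinch midpoint until the pinch releases.
    final class GrabState {
        var held: Entity?

        func update(pinching: Bool, pinchMidpoint: SIMD3<Float>, candidate: Entity) {
            if pinching {
                if held == nil,
                   simd_distance(candidate.position(relativeTo: nil), pinchMidpoint) < 0.05 {
                    held = candidate                      // begin the grab
                }
                held?.setPosition(pinchMidpoint, relativeTo: nil)
            } else {
                held = nil                                // release
            }
        }
    }

Even this toy version ignores orientation, release velocity, and occlusion recovery, which is where camera-only tracking gets hard.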

In comparison, each controller on my Valve Index has 87 sensors that can distinguish touch vs. actual press (triggers etc.), pressure, and presence vs. absence (fingers on the handle), as well as the usual orientation/accelerometer sensors. Even with an abundance of processing power, a camera-based tracking system can't get there.


It's notable that despite the claimed joint tracking, the only actual hand interaction shown in that video is knocking something over, not any real dexterous handling requiring precise finger motion. I get the impression they made a deliberate decision not to include any 3D handling in the launch demos because it's just not good, whereas 'look and click/swipe' works well.



