I'm going to guess this is because the image-to-depth data, while good, is not perfectly accurate and therefore can't serve as a shared ground truth between multiple images. At that point what you want is a more traditional structure-from-motion workflow, which already exists and does a decent job.
I think SideFX was the first to do that with Houdini. It's one of my favourite micro-UX features of high-end graphics software, coming in at a close second to Nuke's use of both linear and non-linear scales for slider values.
Yes, and when working with footage shot with anamorphic lenses, the footage is stored with non-square pixels and has to be mapped to the square pixels of our screens to be viewed at its intended aspect ratio. This is done either at the beginning (conforming the footage before sending it to editorial / VFX) or at the end (conforming to square pixels as a final step) of the post-production workflow, depending on the show.
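To make the pixel-aspect arithmetic concrete, here's a minimal sketch (the 2x squeeze and the function name are illustrative, not any studio's actual conform code):

```python
def conformed_resolution(width: int, height: int, pixel_aspect: float) -> tuple[int, int]:
    """Map a non-square-pixel image to square pixels by scaling its width."""
    return round(width * pixel_aspect), height

# A 2x-squeezed anamorphic scan stored at 1920x1080 with a pixel aspect
# of 2.0 displays at 3840x1080 (~3.56:1) once conformed to square pixels.
print(conformed_resolution(1920, 1080, 2.0))  # (3840, 1080)
```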
What a bummer. It seems like what they're asking for here (a written agreement that users will be able to access 3rd-party app stores) would be a win-win-win for Core Devices, Rebble, and users. Core Devices gets to look like the good guy (ideally driving interest in the product), Rebble gets to look like a huge winner for maintaining something for the community (as they are), and users get an open ecosystem.
There's still a chance for a win here, but it looks like the door is closing.
You can see everything in your field of vision, but the area DIRECTLY in the centre has the highest level of detail. This image has high-frequency animated details that are not perceived equally across your entire FOV. The animated bit right in the middle at any given time is where your brain processes the most detail, and it's also where you are looking.
I had to think about it, but are you saying all the stars are animated to rotate, but the amount they move between frames is too small for you to see unless it's in your fovea?
They're just so small that you only see a shapeless blur outside your fovea. If you applied an artificial blur filter to the whole screen, you'd no longer see any movement either, because all high-resolution detail is removed. A 3x3 box blur will erase differences between frames when the motion between them is only a pixel or so.
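You can check that numerically with a toy NumPy sketch (the single-pixel "star" and the hand-rolled blur are purely illustrative):

```python
import numpy as np

def box_blur_3x3(img: np.ndarray) -> np.ndarray:
    """Average each pixel with its 8 neighbours (borders left at zero for brevity)."""
    out = np.zeros_like(img)
    out[1:-1, 1:-1] = sum(
        img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    return out

# Two "frames": a single bright star that moves one pixel to the right.
a = np.zeros((32, 32)); a[16, 16] = 1.0
b = np.zeros((32, 32)); b[16, 17] = 1.0

print(np.abs(a - b).max())                              # 1.0  -> full-contrast flicker
print(np.abs(box_blur_3x3(a) - box_blur_3x3(b)).max())  # ~0.11 -> an order of magnitude less
```

After the blur, the peak frame-to-frame difference drops from 1.0 to about 1/9, which is the sense in which the blur "erases" the movement.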
Generally yes, but we're still working on it all these years later! This article by Chris Brejon offers a very in-depth look into the differences brought about by different display transforms: https://chrisbrejon.com/articles/ocio-display-transforms-and...
The "best" right now, in my opinion, is AgX, which at this point has various "flavours" that operate slightly differently. You can find a nice comparison of OCIO configs here: https://liamcollod.xyz/picture-lab-lxm/CAlc-D8T-dragon
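For intuition about what a display transform actually does, here's a toy s-curve sketch (assumed numbers; not the real AgX math, which uses a log2 encoding with a fitted sigmoid plus per-channel colour handling):

```python
import numpy as np

def toy_display_transform(scene_linear: float) -> float:
    """Map open-domain scene-linear values into [0, 1] display values.

    Toy stand-in for a real display transform: log-encode a dynamic range
    around middle grey (0.18), then roll shadows and highlights off with
    an s-curve. The symmetric +/-6.5-stop range is illustrative only.
    """
    lo, hi = -6.5, 6.5
    log2_exposure = np.log2(max(scene_linear, 1e-10) / 0.18)
    t = np.clip((log2_exposure - lo) / (hi - lo), 0.0, 1.0)
    return float(t * t * (3.0 - 2.0 * t))  # smoothstep as the s-curve

for v in (0.0018, 0.18, 1.0, 18.0):  # deep shadow, middle grey, bright, hot highlight
    print(f"{v:g} -> {toy_display_transform(v):.3f}")
```

Different "flavours" of a transform mostly come down to different choices of that curve and how colours are bent along it.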
Most of Blender's icons are actually made in Penpot, which is also what the Blender Foundation uses for UI prototyping. The brush icons are made in Blender, though!