* We made an open source C++ NeRF volumetric video engine for VR.
* It trains a NeRF for each frame of video, then compresses it to our 'ldi3' format, which enables real-time playback on Quest or Vision Pro in WebXR, Unity, or Unreal (open source players are available for all of these; see the pipeline sketch after this list).
* We provide a paper explaining how to bake NeRFs into layered depth images for real time rendering.
* This is a commercial product. It is free to try, but requires a license to render the final results as a video output.
* It can output 2D videos where a virtual camera flies along a smooth path, or VR180 and Looking Glass portrait holograms.
* If I'm not mistaken this is the first NeRF-based VR stitching engine.
* Unlike NeRF Studio, Volurama does not require command line tools to install.
* Unlike Luma.ai, Volurama is a Windows/Mac application, which means you process your data locally rather than uploading it to the cloud.
* This is a custom, from-scratch NeRF and structure-from-motion engine written in C++. It doesn't depend on COLMAP or NVIDIA's tiny-cuda-nn. It incorporates ideas from several recent publications, as well as a few proprietary tricks developed at Lifecast.
* This is a 1.0 alpha release, and there are bound to be some bugs.
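For anyone curious about the shape of the pipeline described above, here is a minimal sketch of the per-frame loop. The types and function names are hypothetical placeholders, not the real Volurama API; they only show where per-frame NeRF training, baking to ldi3, and video output fit.

```cpp
#include <string>
#include <vector>

// Baked layered-depth-image frame (stand-in for the real ldi3 data).
struct Ldi3Frame {};

// Hypothetical placeholders for the engine's train / bake / encode steps.
Ldi3Frame trainAndBakeNerf(const std::vector<std::string>& /*views*/) {
  return Ldi3Frame{};  // real engine: fit a NeRF, then bake it to layers
}
void appendToLdi3Video(const Ldi3Frame& /*frame*/, const std::string& /*path*/) {
  // real engine: compress the layers and append to the output video
}

int main() {
  const int num_frames = 300;  // e.g., 10 seconds at 30 fps
  for (int f = 0; f < num_frames; ++f) {
    // Images from each camera for this frame of the capture.
    std::vector<std::string> views = {"cam0/" + std::to_string(f) + ".png"};
    Ldi3Frame baked = trainAndBakeNerf(views);    // one NeRF per frame
    appendToLdi3Video(baked, "output_ldi3.mp4");  // real-time-playable output
  }
  return 0;
}
```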
Today Lifecast unveils fully 3D immersive environments generated from text, viewable in VR (e.g., Quest 2) or on 2D screens. We do this with a combination of Stable Diffusion and several other neural nets to make the result 3D, combined with Lifecast's format for 6DOF VR photos and video. It's free to try, and we do the processing in the cloud. Check it out and tell us what you think! This is version 1.0 and we are iterating quickly, so expect improvements in the future.
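As a rough mental model only (our assumption, not the exact pipeline): a text-to-image model produces the scenery, depth is estimated by other networks, and the result is baked into the 6DOF photo format. The function names below are hypothetical.

```cpp
#include <string>

// Hypothetical stand-ins; the real system combines Stable Diffusion with
// several other networks and Lifecast's 6DOF format.
struct Image {};
struct DepthMap {};
struct SixDofPhoto {};

Image generatePanoramaFromText(const std::string& /*prompt*/) { return {}; }
DepthMap estimateDepth(const Image& /*img*/) { return {}; }
SixDofPhoto bakeTo6DofFormat(const Image& /*img*/, const DepthMap& /*depth*/) {
  return {};
}

int main() {
  Image pano = generatePanoramaFromText("a mossy forest at sunrise");
  DepthMap depth = estimateDepth(pano);                // neural depth estimation
  SixDofPhoto photo = bakeTo6DofFormat(pano, depth);   // viewable in VR or on 2D screens
  (void)photo;
  return 0;
}
```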
What’s stopping you from offering a mobile stereoscopic view? There’s likely more Google Cardboard users out there than active Horizons users at this point.
Artifacts at the edges are due to occlusions. An occlusion is a part of the scene which wasn't visible to the original camera. You see these if you move far from the original camera in VR to look behind something. This is a really hard problem for 6DOF. We've been improving the quality of occlusions over time, e.g.:
* v1: https://lifecastvr.com/demo_maui.html
* v2: https://lifecastvr.com/kalalea_fire.html
* v3: https://lifecastvr.com/hubner4.html
Version 3 uses a 2-layer representation: the background layer has its own image + depth map, which is drawn behind the foreground to fill in the occlusions. This background layer can be precomputed in a variety of ways. For example, here is a CGI synthetic scene where we can construct the background layer perfectly:
https://lifecastvr.com/liferay.html
However, generating the background layer for real-world content is more challenging; we are on version 1 of that and will improve it with machine learning in a future release. We can also substitute a "plate" 3D scene for the background in cases where the camera doesn't move, and we have experimented with using data from other frames when the camera moves. This will improve over time.
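To make the two-layer idea concrete, here is a minimal sketch (a simplification for illustration, not the actual ldi3 spec): each layer is an image plus a per-pixel depth map, and once both layers are reprojected into the novel viewpoint, the foreground is composited over the background so disoccluded regions show the background instead of holes.

```cpp
#include <array>
#include <cstdint>
#include <cstdio>
#include <vector>

// One layer of the (simplified) representation: color + per-pixel depth.
struct Layer {
  int width = 0, height = 0;
  std::vector<std::array<uint8_t, 4>> rgba;  // straight alpha
  std::vector<float> depth;                  // meters, per pixel
};

struct TwoLayerFrame {
  Layer background;  // precomputed to cover regions hidden from the camera
  Layer foreground;  // what the original camera saw
};

// After both layers are reprojected (e.g., as depth-displaced meshes) into
// the novel viewpoint, each output pixel is "foreground over background":
// where the foreground has no coverage because the viewer is peeking behind
// an object, the background layer shows through instead of a hole.
std::array<uint8_t, 3> compositePixel(const std::array<uint8_t, 4>& fg,
                                      const std::array<uint8_t, 3>& bg) {
  const float a = fg[3] / 255.0f;
  std::array<uint8_t, 3> out{};
  for (int c = 0; c < 3; ++c) {
    out[c] = static_cast<uint8_t>(a * fg[c] + (1.0f - a) * bg[c] + 0.5f);
  }
  return out;
}

int main() {
  // A fully transparent foreground sample (a disocclusion) lets the
  // background color through unchanged.
  std::array<uint8_t, 4> fg = {0, 0, 0, 0};
  std::array<uint8_t, 3> bg = {120, 180, 90};
  auto out = compositePixel(fg, bg);
  std::printf("%d %d %d\n", out[0], out[1], out[2]);
  return 0;
}
```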
When we moved to multi (depth) camera setups, in-painting from old frames worked really well, even before any masking off of static vs. moving content (all done in real time, for live streaming).
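A minimal sketch of that idea (an illustration under stated assumptions, not the commenter's actual code): fill disocclusion holes in the current reprojected frame from the last frame in which each pixel was seen.

```cpp
#include <array>
#include <cstdint>
#include <vector>

using Pixel = std::array<uint8_t, 4>;  // RGBA; alpha == 0 marks a hole

// Fill holes in the current reprojected frame from a history buffer of the
// most recently seen color at each pixel, then update the history. Masking
// static vs. moving content would refine this, but even the naive version
// can work surprisingly well.
void inpaintFromHistory(std::vector<Pixel>& current, std::vector<Pixel>& history) {
  for (size_t i = 0; i < current.size(); ++i) {
    if (current[i][3] == 0 && history[i][3] != 0) {
      current[i] = history[i];   // borrow the last known color here
    } else if (current[i][3] != 0) {
      history[i] = current[i];   // remember what we can see now
    }
  }
}

int main() {
  std::vector<Pixel> current = {Pixel{0, 0, 0, 0}};        // one hole pixel
  std::vector<Pixel> history = {Pixel{200, 10, 10, 255}};  // previously seen color
  inpaintFromHistory(current, history);
  return current[0][0] == 200 ? 0 : 1;  // exit 0 iff the hole was filled
}
```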