Mono2VR: Exploring Immersive Experiences of Monocular Videos

Abstract

Despite the growing popularity of VR headsets, most personal videos are still viewed on flat 2D displays and lack the depth and motion cues needed for immersive playback. Converting monocular videos into 3D experiences suitable for VR remains challenging because existing solutions are complex and computationally demanding. We present Mono2VR, an approach that transforms standard monocular videos into immersive 3D content for VR headsets with minimal processing time and modest hardware requirements. Unlike recent high-fidelity methods that are impractical for longer videos or real-time use, Mono2VR runs on consumer hardware in minutes per second of video. Our pipeline estimates camera parameters and depth maps to reconstruct both dynamic foreground and static background elements. The resulting 3D videos support stereoscopic playback and head-motion parallax, enhancing immersion. We evaluated Mono2VR both technically and in a user study, where participants rated our output on par with ground-truth 3D content. These results highlight Mono2VR's potential to make immersive video experiences accessible to a broad audience.
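
To make the stereoscopic-playback idea concrete, the sketch below shows how a stereo pair could in principle be synthesized from a single frame plus an estimated depth map using simple depth-image-based rendering (forward warping). This is an illustrative simplification, not Mono2VR's actual pipeline: the function name, the metric-depth assumption, the pinhole-camera model, and the 63 mm default baseline are all assumptions introduced here for illustration.

    import numpy as np

    def naive_stereo_from_depth(rgb, depth, focal_px, baseline_m=0.063):
        """Rough stereo view synthesis from one frame and a depth map
        (simple depth-image-based rendering via forward warping).

        rgb        : (H, W, 3) uint8 frame
        depth      : (H, W) metric depth in meters, e.g. from a monocular estimator
        focal_px   : camera focal length in pixels
        baseline_m : interocular distance; roughly 63 mm is a common default
        Returns [left, right] views; disocclusions are left as black holes.
        """
        h, w = depth.shape
        # Horizontal disparity for half the baseline: each eye is offset by
        # half of the interocular distance around the original viewpoint.
        disparity = focal_px * (baseline_m / 2.0) / np.maximum(depth, 1e-6)

        # Visit source pixels far-to-near so nearer content overwrites farther
        # content when several pixels land on the same target location.
        order = np.argsort(depth, axis=None)[::-1]
        ys, xs = np.unravel_index(order, depth.shape)

        views = []
        for sign in (+1, -1):  # +1 shifts pixels right (left eye), -1 left (right eye)
            target_x = np.rint(xs + sign * disparity[ys, xs]).astype(int)
            target_x = np.clip(target_x, 0, w - 1)
            view = np.zeros_like(rgb)
            view[ys, target_x] = rgb[ys, xs]  # nearest-pixel forward splat
            views.append(view)
        return views

A complete system along the lines described in the abstract would additionally fill disocclusion holes, separate the dynamic foreground from the reconstructed static background, and rewarp each frame as the headset pose changes to provide head-motion parallax.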