InstantVR: Converting Panoramic Image Sequences into Walkable 6-DoF 3D Scenes


Abstract

The demand for photorealistic Virtual Reality (VR) content is outpacing the ability of creators to model environments manually. While consumer 360-degree cameras are ubiquitous, they traditionally offer only 3-Degrees-of-Freedom (3-DoF) experiences, where users can look around but cannot physically move through the space. This restriction breaks immersion and frequently induces vestibular mismatch (motion sickness). In this paper, we propose InstantVR, a novel pipeline that automatically converts a sparse sequence of panoramic (equirectangular) images into a fully volumetric, walkable 6-DoF environment. We leverage 3D Gaussian Splatting (3DGS) adapted for spherical projection models to reconstruct high-fidelity scenes in minutes. Furthermore, we introduce a density-based Navigability Analysis module that automatically extracts a collision mesh and floor plan from the reconstructed point cloud, allowing users to physically walk within the generated scene without passing through virtual geometry. Experimental results demonstrate that InstantVR renders at >98 FPS per eye on consumer VR hardware, significantly outperforming NeRF-based alternatives in both training speed (8 mins vs. 4 hours) and rendering latency.
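As a rough illustration of the kind of density-based navigability analysis the abstract describes, the sketch below rasterizes reconstructed 3D points (e.g. Gaussian centers) into a 2D occupancy grid: points whose height falls within an obstacle band above the floor mark their grid cell as blocked. This is a minimal assumption-laden sketch, not the paper's actual algorithm; the function name, cell size, height band, and count threshold are all hypothetical choices for illustration.

```python
import numpy as np

def floor_occupancy(points, cell=0.10, band=(0.2, 1.8), min_count=3):
    """Build a 2D occupancy grid (True = blocked) from an (N, 3) point cloud.

    Assumes a y-up coordinate frame with the floor near y = 0. Points whose
    height lies in `band` (an obstacle band above the floor, below the
    ceiling) are binned into square cells of side `cell` metres; a cell is
    blocked once it accumulates at least `min_count` points.
    """
    # Keep only points at obstacle height; floor and ceiling points pass under/over.
    pts = points[(points[:, 1] >= band[0]) & (points[:, 1] < band[1])]
    if len(pts) == 0:
        return np.zeros((1, 1), dtype=bool), (0.0, 0.0)
    # Grid origin at the minimum x/z of the obstacle points.
    origin = pts[:, [0, 2]].min(axis=0)
    idx = np.floor((pts[:, [0, 2]] - origin) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=int)
    # Accumulate point counts per cell, then threshold into a boolean mask.
    np.add.at(grid, (idx[:, 0], idx[:, 1]), 1)
    return grid >= min_count, tuple(origin)
```

A real pipeline would likely also weight points by Gaussian opacity and extrude the blocked cells into a collision mesh, but the density-threshold-per-cell idea is the core of this style of navigability extraction.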
