Towards AI-Powered Automatic 3D Scene Generation for the Metaverse: A Comparative Analysis of Manual and Photogrammetry Techniques

Abstract

The development of the metaverse relies on the creation of realistic, immersive, and high-performance 3D scenes. However, manual construction of these scenes remains a time-consuming process that requires advanced technical skills, posing a significant barrier to widespread adoption. Recent advancements in AI-assisted photogrammetry tools offer an alternative approach by enabling semi-automatic 3D reconstruction of real-world environments. Nevertheless, the usability, quality, and real-time applicability of the resulting assets remain underexplored. This study assesses the effectiveness of photogrammetry-based tools for creating functional and visually appealing 3D scenes for metaverse applications. To this end, a comparative experiment was conducted by reconstructing three environments using manual modeling and two photogrammetry tools based on distinct technologies: Polycam (MVS-based) and LumaAI (NeRF-based). The resulting models were evaluated using quantitative metrics, including modeling time, polygon count, visual similarity (SSIM), real-time performance (GPU usage and FPS), and Hausdorff distance. The findings reveal that photogrammetry significantly accelerates the modeling process but does not consistently outperform manual modeling in terms of polygonal optimization and real-time rendering efficiency. Furthermore, the choice between photogrammetry tools depends on object characteristics and application constraints. This study provides practical insights and empirical guidelines for developers and researchers, highlighting the trade-offs between automation, performance, and visual fidelity in 3D scene generation for the metaverse.
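
To make the two fidelity metrics named in the abstract concrete, the sketch below shows one common way to compute SSIM between two rendered views and a symmetric Hausdorff distance between two vertex sets. This is a minimal illustration only; the exact rendering setup, sampling, and implementation used in the study are not specified here, and the file-free stand-in data, array shapes, and function names are assumptions for demonstration.

```python
# Illustrative sketch (not the paper's implementation):
# - SSIM between two same-resolution renders of a scene
# - symmetric Hausdorff distance between two (N, 3) point sets
import numpy as np
from skimage.metrics import structural_similarity
from scipy.spatial.distance import directed_hausdorff


def view_ssim(render_a: np.ndarray, render_b: np.ndarray) -> float:
    """SSIM between two RGB renders, assumed normalized to [0, 1]."""
    return structural_similarity(render_a, render_b, channel_axis=-1, data_range=1.0)


def symmetric_hausdorff(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Symmetric Hausdorff distance: max of the two directed distances."""
    d_ab = directed_hausdorff(points_a, points_b)[0]
    d_ba = directed_hausdorff(points_b, points_a)[0]
    return max(d_ab, d_ba)


if __name__ == "__main__":
    # Stand-in data: in practice these would be screenshots of the manual and
    # reconstructed scenes taken from the same camera pose, and vertices
    # sampled from the corresponding meshes.
    rng = np.random.default_rng(0)
    render_manual = rng.random((256, 256, 3))
    render_photo = np.clip(render_manual + 0.05 * rng.random((256, 256, 3)), 0.0, 1.0)
    print("SSIM:", view_ssim(render_manual, render_photo))

    verts_manual = rng.random((5000, 3))
    verts_photo = verts_manual + 0.01 * rng.standard_normal((5000, 3))
    print("Hausdorff:", symmetric_hausdorff(verts_manual, verts_photo))
```

Higher SSIM indicates renders that are perceptually closer, while a lower Hausdorff distance indicates geometry that deviates less from the reference model; both depend on consistent camera poses and mesh scaling between the compared scenes.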
