AAGS: Appearance-Aware 3D Gaussian Splatting with Unconstrained Photo Collections

Abstract

Reconstructing 3D scenes from unconstrained collections of in-the-wild photographs has long been a challenging problem. The main difficulty lies in the varying appearance conditions and transient occluders present in uncontrolled image samples. With the advancement of Neural Radiance Fields (NeRF), previous works have developed effective strategies to tackle this issue. However, limited by deep networks and volumetric rendering, these methods generally incur substantial time costs. Recently, the advent of 3D Gaussian Splatting (3DGS) has significantly accelerated training and rendering for 3D reconstruction tasks. Nevertheless, vanilla 3DGS struggles to distinguish the varying appearances of in-the-wild photo collections. To address these problems, we propose Appearance-Aware 3D Gaussian Splatting (AAGS), a novel extension of 3DGS to unconstrained photo collections. Specifically, we employ an appearance extractor to capture global features of image samples, enabling the distinction of visual conditions, e.g., illumination and weather, across different observations. Furthermore, to mitigate the impact of transient occluders, we design a transient-removal module that adaptively learns a 2D visibility map to decompose the static target from complex real-world scenes. Extensive experiments validate the effectiveness and superiority of AAGS. Compared with previous works, our method not only achieves better reconstruction and rendering quality but also significantly reduces both training and rendering overhead. Code will be released at https://github.com/Zhang-WenCong/AAGS.
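The abstract does not specify the architectures of the two modules, so the following is only a minimal PyTorch sketch of the two ideas it names: a per-image global appearance embedding and a learned 2D visibility map that down-weights transient occluders in the photometric loss. All module names, layer sizes, and the regularization weight below are hypothetical stand-ins, not the authors' implementation.

```python
# Hypothetical sketch of AAGS's two modules as described in the abstract.
# Nothing here reflects the released code; names and sizes are assumptions.
import torch
import torch.nn as nn

class AppearanceExtractor(nn.Module):
    """Maps an input image to a global appearance embedding intended to
    condition rendered colors on per-image conditions (illumination, weather)."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> one vector per image
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) -> (B, embed_dim)
        return self.fc(self.conv(image).flatten(1))

class TransientRemoval(nn.Module):
    """Predicts a per-pixel visibility map in [0, 1]; low values flag likely
    transient occluders so they contribute less to the reconstruction loss."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(image))  # (B, 1, H, W)

def masked_photometric_loss(rendered, target, visibility, reg_weight=0.01):
    """L1 loss weighted by the visibility map, plus a regularizer that
    penalizes the trivial all-zero (everything-transient) solution."""
    recon = (visibility * (rendered - target).abs()).mean()
    reg = reg_weight * (1.0 - visibility).mean()
    return recon + reg
```

In such a scheme, the embedding from `AppearanceExtractor` would be fed alongside each Gaussian's attributes when predicting view colors, while `masked_photometric_loss` replaces the plain photometric term of vanilla 3DGS; the regularizer is one common way to keep the visibility map from collapsing to zero.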
