MeMVSNet: Monocular Depth Enhanced Multi-view Reconstruction

Abstract

Multi-view stereo networks typically build a multi-view cost volume to regress depth values. Previous methods introduce carefully designed network structures for feature extraction to improve the quality of the cost volume. However, the extracted features can be indistinguishable in texture-less regions, on reflective surfaces, and in other challenging areas. To this end, we propose MeMVSNet, a novel framework that fuses monocular depth predictions to enhance feature extraction. In particular, we utilize a two-branch feature fusion network to extract geometric cues from monocular depth predictions and enrich the information in the image features. To eliminate the influence of the unknown scale factor, the monocular depth predictions are normalized first. The proposed method achieves competitive performance on DTU and Tanks and Temples (T&T). Qualitative evaluation demonstrates that our method is more robust in challenging scenes.
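
The following is a minimal PyTorch sketch of the two ideas the abstract describes: min-max normalizing monocular depth predictions to remove the unknown scale factor, and fusing the resulting depth cues with image features in a two-branch network. All layer sizes, module names, and the concatenation-based fusion are illustrative assumptions, not the authors' actual MeMVSNet architecture.

```python
import torch
import torch.nn as nn


class DepthFeatureFusion(nn.Module):
    """Hypothetical two-branch fusion of image features with
    normalized monocular depth cues (a sketch, not the paper's model)."""

    def __init__(self, feat_channels=32, depth_channels=8):
        super().__init__()
        # Depth branch: lifts a single-channel monocular depth map to features.
        self.depth_branch = nn.Sequential(
            nn.Conv2d(1, depth_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(depth_channels, depth_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Fusion: maps concatenated image + depth features back to feat_channels.
        self.fuse = nn.Conv2d(feat_channels + depth_channels, feat_channels, 1)

    @staticmethod
    def normalize_depth(depth, eps=1e-6):
        # Per-image min-max normalization removes the arbitrary scale
        # of monocular depth predictions (assumed normalization scheme).
        d_min = depth.amin(dim=(2, 3), keepdim=True)
        d_max = depth.amax(dim=(2, 3), keepdim=True)
        return (depth - d_min) / (d_max - d_min + eps)

    def forward(self, image_feat, mono_depth):
        depth_feat = self.depth_branch(self.normalize_depth(mono_depth))
        return self.fuse(torch.cat([image_feat, depth_feat], dim=1))


# Usage: fuse 32-channel image features with a monocular depth map.
fusion = DepthFeatureFusion()
out = fusion(torch.randn(2, 32, 64, 80), torch.rand(2, 1, 64, 80))
print(out.shape)  # torch.Size([2, 32, 64, 80])
```

The enriched features would then feed the usual MVS cost-volume construction in place of plain image features; the normalization step is what makes depth maps from an off-the-shelf monocular predictor usable despite their scale ambiguity.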
