Jingyang Zhang; Yao Yao; Shiwei Li; Zixin Luo; Tian Fang

Abstract
Learning-based multi-view stereo (MVS) methods have demonstrated promising results. However, very few existing networks explicitly take pixel-wise visibility into consideration, resulting in erroneous cost aggregation from occluded pixels. In this paper, we explicitly infer and integrate pixel-wise occlusion information in the MVS network via matching uncertainty estimation. The pair-wise uncertainty map is jointly inferred with the pair-wise depth map and is further used as weighting guidance during multi-view cost volume fusion. As such, the adverse influence of occluded pixels is suppressed in the cost fusion. The proposed framework, Vis-MVSNet, significantly improves depth accuracy in scenes with severe occlusion. Extensive experiments on the DTU, BlendedMVS, and Tanks and Temples datasets demonstrate the effectiveness of the proposed framework.
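The core idea of weighting the pairwise cost volumes by a predicted per-pixel uncertainty can be sketched in a few lines. The snippet below is an illustrative PyTorch sketch, not the paper's exact implementation: the tensor shapes, the `exp(-uncertainty)` weighting, and the function name are assumptions chosen to show how high-uncertainty (likely occluded) pixels are down-weighted during multi-view cost fusion.

```python
import torch

def fuse_cost_volumes(pair_costs, pair_uncertainties, eps=1e-6):
    """Fuse pairwise cost volumes using uncertainty-based weights (illustrative sketch).

    pair_costs:         list of [B, D, H, W] cost volumes, one per source view
    pair_uncertainties: list of [B, 1, H, W] uncertainty maps jointly predicted
                        with the pairwise depth (larger = less reliable match,
                        e.g. due to occlusion)
    Returns a fused cost volume of shape [B, D, H, W].
    """
    weighted_sum = 0.0
    weight_sum = 0.0
    for cost, uncertainty in zip(pair_costs, pair_uncertainties):
        # Assumed mapping from uncertainty to confidence: occluded pixels with
        # high uncertainty receive exponentially smaller weights.
        weight = torch.exp(-uncertainty)              # [B, 1, H, W]
        weighted_sum = weighted_sum + weight * cost   # broadcast over depth dim
        weight_sum = weight_sum + weight
    return weighted_sum / (weight_sum + eps)
```

In this sketch the fused volume is a per-pixel weighted average over source views, so a view in which a pixel is occluded contributes little to the final matching cost for that pixel.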
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3D Reconstruction on DTU | Vis-MVSNet | Acc: 0.369, Comp: 0.361, Overall: 0.365 |
| Point Clouds on DTU | Vis-MVSNet | Overall: 0.365 |
| Point Clouds on Tanks and Temples | Vis-MVSNet | Mean F1 (Intermediate): 60.03 |