Laurenz Reichardt Patrick Mangat Oliver Wasenmüller

Abstract
LiDAR depth maps provide environmental guidance in a variety of applications. However, such depth maps are typically sparse and insufficient for complex tasks such as autonomous navigation. State-of-the-art methods use image-guided neural networks for dense depth completion. We develop a guided convolutional neural network focusing on gathering dense and valid information from sparse depth maps. To this end, we introduce a novel layer with spatially variant and content-dependent dilation to include additional data from the sparse input. Furthermore, we propose a sparsity invariant residual bottleneck block. We evaluate our Dense Validity Mask Network (DVMN) on the KITTI depth completion benchmark and achieve state-of-the-art results. At the time of submission, our network is the leading method using sparsity invariant convolution.
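The sparsity invariant convolution mentioned in the abstract normalizes each convolution window by the number of valid (observed) input pixels and propagates a validity mask alongside the features. The snippet below is a minimal PyTorch sketch of that general technique (following Uhrig et al., "Sparsity Invariant CNNs"), not the authors' DVMN implementation; the layer sizes, epsilon value, and the example sparsity level are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseConv(nn.Module):
    """Convolution normalized by the count of valid pixels in each window."""

    def __init__(self, in_channels, out_channels, kernel_size, eps=1e-8):
        super().__init__()
        padding = kernel_size // 2
        # Feature convolution without bias; the bias is added after normalization.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size,
                              padding=padding, bias=False)
        self.bias = nn.Parameter(torch.zeros(out_channels))
        # Fixed all-ones kernel that counts valid pixels inside each window.
        self.register_buffer("ones", torch.ones(1, 1, kernel_size, kernel_size))
        self.pool = nn.MaxPool2d(kernel_size, stride=1, padding=padding)
        self.eps = eps

    def forward(self, x, mask):
        # Zero out invalid pixels, convolve, then renormalize by how many
        # valid pixels actually contributed at each output location.
        x = self.conv(x * mask)
        valid_count = F.conv2d(mask, self.ones, padding=self.ones.shape[-1] // 2)
        x = x / (valid_count + self.eps) + self.bias.view(1, -1, 1, 1)
        # A location becomes valid if any input pixel in its window was valid.
        mask = self.pool(mask)
        return x, mask


# Usage: a sparse depth map with a binary validity mask (~5% valid points,
# roughly the density of projected LiDAR in KITTI; the exact rate is assumed).
depth = torch.rand(1, 1, 64, 256)
mask = (torch.rand(1, 1, 64, 256) > 0.95).float()
layer = SparseConv(1, 16, kernel_size=3)
features, new_mask = layer(depth * mask, mask)
print(features.shape, new_mask.shape)
```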
Benchmarks
| Benchmark | Method | MAE [mm] | RMSE [mm] | iMAE [1/km] | iRMSE [1/km] |
|---|---|---|---|---|---|
| depth-completion-on-kitti-depth-completion | DVMN | 220.37 | 776.31 | 0.94 | 2.21 |