MonoViT: Self-Supervised Monocular Depth Estimation with a Vision Transformer
Chaoqiang Zhao Youmin Zhang Matteo Poggi Fabio Tosi Xianda Guo Zheng Zhu Guan Huang Yang Tang Stefano Mattoccia

Abstract
Self-supervised monocular depth estimation is an attractive solution that does not require hard-to-source depth labels for training. Convolutional neural networks (CNNs) have recently achieved great success in this task. However, their limited receptive field constrains existing network architectures to reason only locally, dampening the effectiveness of the self-supervised paradigm. In light of the recent successes achieved by Vision Transformers (ViTs), we propose MonoViT, a brand-new framework combining the global reasoning enabled by ViT models with the flexibility of self-supervised monocular depth estimation. By combining plain convolutions with Transformer blocks, our model can reason locally and globally, yielding depth predictions at a higher level of detail and accuracy and allowing MonoViT to achieve state-of-the-art performance on the established KITTI dataset. Moreover, MonoViT demonstrates superior generalization on other datasets such as Make3D and DrivingStereo.
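The core idea of the abstract, fusing a local convolutional branch with a global self-attention branch in one block, can be illustrated with a minimal NumPy sketch. This is not the actual MonoViT architecture (which builds on a full ViT encoder and trains with photometric self-supervision); all shapes, weights, and the residual fusion here are illustrative assumptions.

```python
import numpy as np

def conv3x3(x, w):
    """Local branch: 3x3 convolution over a (H, W, C) feature map.
    w has shape (3, 3, C, C); each output pixel mixes only its 3x3 neighborhood."""
    H, W, C = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 3, j:j + 3, :]            # (3, 3, C) local window
            out[i, j] = np.einsum('ijc,ijcd->d', patch, w)
    return out

def self_attention(tokens, wq, wk, wv):
    """Global branch: single-head self-attention over all tokens,
    so every pixel can attend to every other pixel in the image."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)        # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

def hybrid_block(x, conv_w, wq, wk, wv):
    """Toy local+global block: sum a convolutional branch and an
    attention branch with a residual connection (an assumed fusion rule)."""
    H, W, C = x.shape
    local = conv3x3(x, conv_w)
    tokens = x.reshape(H * W, C)                        # flatten pixels to tokens
    global_ = self_attention(tokens, wq, wk, wv).reshape(H, W, C)
    return x + local + global_

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
x = rng.standard_normal((H, W, C))
conv_w = rng.standard_normal((3, 3, C, C)) * 0.1
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
y = hybrid_block(x, conv_w, wq, wk, wv)
print(y.shape)  # same spatial resolution and channels as the input
```

The output keeps the input's shape, so such blocks can be stacked like ordinary CNN stages while still mixing information across the whole image at every stage.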
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| monocular-depth-estimation-on-kitti-2 | MonoViT | absolute relative error: 0.093 |
| monocular-depth-estimation-on-kitti-eigen-1 | MonoViT (MS+1024x320) | δ < 1.25: 0.912; δ < 1.25²: 0.969; δ < 1.25³: 0.985; RMSE: 4.202; RMSE log: 0.169; Sq Rel: 0.671; absolute relative error: 0.093; Resolution: 1024x320; Mono: X |