Transformer-Based Attention Networks for Continuous Pixel-Wise Prediction
Guanglei Yang, Hao Tang, Mingli Ding, Nicu Sebe, Elisa Ricci

Abstract
While convolutional neural networks have had a tremendous impact on various computer vision tasks, they generally demonstrate limitations in explicitly modeling long-range dependencies due to the intrinsic locality of the convolution operation. Initially designed for natural language processing tasks, Transformers have emerged as alternative architectures with an innate global self-attention mechanism to capture long-range dependencies. In this paper, we propose TransDepth, an architecture that benefits from both convolutional neural networks and transformers. To prevent the network from losing its ability to capture local-level details when transformers are adopted, we propose a novel decoder that employs gate-based attention mechanisms. Notably, this is the first paper that applies transformers to pixel-wise prediction problems involving continuous labels (i.e., monocular depth prediction and surface normal estimation). Extensive experiments demonstrate that the proposed TransDepth achieves state-of-the-art performance on three challenging datasets. Our code is available at: https://github.com/ygjwd12345/TransDepth.
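The abstract describes a hybrid design: a convolutional stem that preserves local detail, a transformer that models long-range dependencies over the resulting feature tokens, and a decoder whose attention gates fuse the two before predicting a continuous value per pixel. As a rough illustration of that idea (not the paper's actual implementation; `HybridDepthNet`, `AttentionGate`, the gating formula, and all layer sizes below are assumptions), a minimal PyTorch sketch might look like:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Gate a CNN skip feature with a learned spatial attention map
    (in the spirit of attention-gated decoders; the exact gating used
    in TransDepth may differ)."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, skip, glob):
        # skip: local CNN feature; glob: transformer feature (same shape)
        attn = self.gate(torch.cat([skip, glob], dim=1))  # (B, 1, H, W)
        return skip * attn + glob

class HybridDepthNet(nn.Module):
    """Toy CNN + transformer encoder with a gated fusion head."""
    def __init__(self, dim=64, heads=4, depth=2):
        super().__init__()
        # Convolutional stem: keeps local detail, downsamples by 4x.
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Transformer: global self-attention over flattened feature tokens.
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=depth)
        self.fuse = AttentionGate(dim)
        self.head = nn.Conv2d(dim, 1, kernel_size=1)  # continuous output

    def forward(self, x):
        f = self.stem(x)                          # (B, C, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)     # (B, H*W/16, C)
        g = self.transformer(tokens)
        g = g.transpose(1, 2).reshape(b, c, h, w)
        fused = self.fuse(f, g)                   # gated local + global fusion
        depth = self.head(fused)                  # one value per pixel
        return nn.functional.interpolate(depth, size=x.shape[-2:],
                                         mode="bilinear", align_corners=False)

model = HybridDepthNet()
out = model(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 1, 64, 64])
```

The gate produces a per-pixel weight in [0, 1], so the decoder can lean on the convolutional feature where local detail matters and on the transformer feature where global context dominates.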
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| Depth Estimation on NYU Depth V2 | TransDepth (AGD + ViT) | RMSE: 0.365 |
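The RMSE figure in the table is the root-mean-square error between predicted and ground-truth depth, reported in meters on NYU Depth V2. A minimal sketch of how it is typically computed, assuming a simple valid-pixel mask (evaluation crops and masking conventions vary between papers):

```python
import torch

def rmse(pred, gt, mask=None):
    """Root-mean-square error between predicted and ground-truth depth maps.
    NYU Depth V2 evaluations usually restrict to valid (nonzero) depth
    pixels; the exact crop/mask convention is paper-dependent."""
    if mask is None:
        mask = gt > 0
    diff = pred[mask] - gt[mask]
    return torch.sqrt(torch.mean(diff ** 2))

pred = torch.rand(480, 640) * 10   # hypothetical predicted depth in meters
gt = torch.rand(480, 640) * 10     # hypothetical ground-truth depth
print(float(rmse(pred, gt)))
```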