SPIdepth: Strengthened Pose Information for Self-supervised Monocular Depth Estimation

Mykola Lavreniuk

Abstract

Self-supervised monocular depth estimation has garnered considerable attention for its applications in autonomous driving and robotics. While recent methods have made strides in leveraging techniques like the Self Query Layer (SQL) to infer depth from motion, they often overlook the potential of strengthening pose information. In this paper, we introduce SPIdepth, a novel approach that prioritizes enhancing the pose network for improved depth estimation. Building upon the foundation laid by SQL, SPIdepth emphasizes the importance of pose information in capturing fine-grained scene structures. By enhancing the pose network's capabilities, SPIdepth achieves remarkable advancements in scene understanding and depth estimation. Experimental results on benchmark datasets such as KITTI, Cityscapes, and Make3D showcase SPIdepth's state-of-the-art performance, surpassing previous methods by significant margins. Specifically, SPIdepth tops the self-supervised KITTI benchmark. Additionally, SPIdepth achieves the lowest AbsRel (0.029), SqRel (0.069), and RMSE (1.394) on KITTI, establishing new state-of-the-art results. On Cityscapes, SPIdepth shows improvements over SQLdepth of 21.7% in AbsRel, 36.8% in SqRel, and 16.5% in RMSE, even without using motion masks. On Make3D, SPIdepth outperforms all other models in the zero-shot setting. Remarkably, SPIdepth achieves these results using only a single image for inference, surpassing even methods that utilize video sequences, thus demonstrating its efficacy and efficiency in real-world applications. Our approach represents a significant leap forward in self-supervised monocular depth estimation, underscoring the importance of strengthening pose information for advancing scene understanding in real-world applications. The code and pre-trained models are publicly available at https://github.com/Lavreniuk/SPIdepth.

Code Repositories

Lavreniuk/SPIdepth (official implementation, PyTorch)

Benchmarks

monocular-depth-estimation-on-kitti-eigen — SPIdepth
  absolute relative error: 0.029
  Sq Rel: 0.069
  RMSE: 1.394
  RMSE log: 0.048
  Delta < 1.25: 0.99
  Delta < 1.25^2: 0.999
  Delta < 1.25^3: 1.000

monocular-depth-estimation-on-kitti-eigen-1 — SPIdepth (MS + 1024x320)
  Resolution: 1024x320
  Mono: X
  absolute relative error: 0.071
  Sq Rel: 0.531
  RMSE: 3.662
  RMSE log: 0.153
  Delta < 1.25: 0.94
  Delta < 1.25^2: 0.973
  Delta < 1.25^3: 0.985

monocular-depth-estimation-on-make3d — SPIdepth
  Abs Rel: 0.299
  Sq Rel: 1.931
  RMSE: 6.672
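The metrics listed above (AbsRel, SqRel, RMSE, RMSE log, and the Delta accuracy thresholds) are the standard evaluation measures for monocular depth on the KITTI Eigen split. A minimal sketch of how they are conventionally computed is below; this is not code from the SPIdepth repository, and the function name and masking convention are illustrative:

```python
# Standard monocular depth-evaluation metrics (Eigen-split convention).
# Sketch only, assuming dense predicted depth and sparse ground truth where
# pixels without LiDAR ground truth are marked with 0.
import numpy as np

def depth_metrics(gt, pred):
    """Compute AbsRel, SqRel, RMSE, RMSE log, and delta accuracies over valid pixels."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    mask = gt > 0                      # ignore pixels with no ground-truth depth
    gt, pred = gt[mask], pred[mask]

    # Delta accuracy: fraction of pixels whose depth ratio is within a threshold.
    thresh = np.maximum(gt / pred, pred / gt)
    d1 = (thresh < 1.25).mean()
    d2 = (thresh < 1.25 ** 2).mean()
    d3 = (thresh < 1.25 ** 3).mean()

    abs_rel = np.mean(np.abs(gt - pred) / gt)          # absolute relative error
    sq_rel = np.mean((gt - pred) ** 2 / gt)            # squared relative error
    rmse = np.sqrt(np.mean((gt - pred) ** 2))          # root mean squared error
    rmse_log = np.sqrt(np.mean((np.log(gt) - np.log(pred)) ** 2))
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                rmse_log=rmse_log, d1=d1, d2=d2, d3=d3)
```

Lower is better for the error metrics, while the delta accuracies approach 1.0 for a perfect prediction, which is why the KITTI rows above pair small AbsRel/RMSE values with Delta values near 1.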
