Tianwei Shen; Zixin Luo; Lei Zhou; Hanyu Deng; Runze Zhang; Tian Fang; Long Quan

Abstract
Accurate relative pose estimation is a key component of visual odometry (VO) and simultaneous localization and mapping (SLAM). Recently, self-supervised learning frameworks that jointly optimize the relative pose and the target-image depth have attracted the attention of the community. Previous works rely on the photometric error between adjacent frames, generated from the predicted depths and poses, which contains large systematic errors in realistic scenes due to reflective surfaces and occlusions. In this paper, we bridge the gap between geometric loss and photometric loss by introducing a matching loss constrained by epipolar geometry into a self-supervised framework. Evaluated on the KITTI dataset, our method outperforms state-of-the-art unsupervised ego-motion estimation methods by a large margin. The code and data are available at https://github.com/hlzz/DeepMatchVO.
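The epipolar constraint behind such a matching loss can be stated concretely: for a matched pixel pair (x1, x2) and the fundamental matrix F derived from the predicted relative pose, x2^T F x1 should vanish, so the distance from x2 to the epipolar line F x1 is a natural penalty. Below is a minimal NumPy sketch of this idea; the function names are illustrative and this is not the paper's TensorFlow implementation.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def epipolar_matching_loss(p1, p2, R, t, K):
    """Mean point-to-epipolar-line distance for matches p1[i] <-> p2[i].

    p1, p2: (N, 2) pixel coordinates in the first/second image.
    R, t:   relative rotation (3, 3) and translation (3,) from frame 1 to 2.
    K:      (3, 3) camera intrinsics.
    """
    E = skew(t) @ R                       # essential matrix E = [t]_x R
    K_inv = np.linalg.inv(K)
    F = K_inv.T @ E @ K_inv               # fundamental matrix
    ones = np.ones((p1.shape[0], 1))
    x1 = np.hstack([p1, ones])            # homogeneous pixels, (N, 3)
    x2 = np.hstack([p2, ones])
    lines = x1 @ F.T                      # epipolar lines l_i = F x1_i
    num = np.abs(np.sum(x2 * lines, axis=1))        # |x2^T F x1|
    den = np.sqrt(lines[:, 0]**2 + lines[:, 1]**2)  # line normalization
    return np.mean(num / den)             # mean distance in pixels
```

In the self-supervised setting this term is differentiable with respect to the predicted pose (R, t), so it can be combined with the photometric loss during training.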
Code Repositories
hlzz/DeepMatchVO: https://github.com/hlzz/DeepMatchVO
Benchmarks
| Benchmark | Methodology | Absolute Trajectory Error [m] | Average Rotational Error e_r [%] | Average Translational Error e_t [%] |
|---|---|---|---|---|
| camera-pose-estimation-on-kitti-odometry | DeepMatchVO | 25.76 | 4.85 | 11.05 |
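For reference, the Absolute Trajectory Error above is conventionally computed as the RMSE of position differences after rigidly aligning the estimated trajectory to ground truth. The following is a rough NumPy sketch using the standard closed-form (Umeyama-style) alignment; the exact evaluation protocol behind the reported numbers may differ.

```python
import numpy as np

def absolute_trajectory_error(gt, est):
    """RMSE of positions after rigidly aligning est to gt.

    gt, est: (N, 3) arrays of camera positions at corresponding timestamps.
    """
    mu_g, mu_e = gt.mean(axis=0), est.mean(axis=0)
    G, E = gt - mu_g, est - mu_e          # centered trajectories
    U, S, Vt = np.linalg.svd(E.T @ G)     # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:         # guard against reflections
        D[2, 2] = -1.0
    R = Vt.T @ D @ U.T                    # rotation aligning est to gt
    t = mu_g - R @ mu_e
    aligned = est @ R.T + t
    return float(np.sqrt(np.mean(np.sum((gt - aligned) ** 2, axis=1))))
```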