CORAL: Colored structural representation for bi-modal place recognition
Yiyuan Pan, Xuecheng Xu, Weijie Li, Yunxiang Cui, Yue Wang, Rong Xiong

Abstract
Place recognition is indispensable for a drift-free localization system. Due to environmental variations, place recognition using a single modality has limitations. In this paper, we propose a bi-modal place recognition method that extracts a compound global descriptor from two modalities, vision and LiDAR. Specifically, we first build an elevation image generated from the 3D points as a structural representation. Then, we derive the correspondences between 3D points and image pixels, which are further used to merge pixel-wise visual features into the elevation map grids. In this way, we fuse the structural features and visual features in a consistent bird's-eye-view frame, yielding a semantic representation, namely CORAL; the whole network is called CORAL-VLAD. Comparisons on the Oxford RobotCar dataset show that CORAL-VLAD outperforms other state-of-the-art methods. We also demonstrate that our network generalizes to other scenes and sensor configurations on cross-city datasets.
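The two steps described above, rasterizing LiDAR points into an elevation image and projecting those points into the camera to gather pixel-wise visual features into the same bird's-eye-view grid, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the grid resolution, grid size, and mean-pooling of features per cell are assumed values and choices for demonstration, and `K` / `T_cam_lidar` stand for the camera intrinsics and LiDAR-to-camera extrinsics of some calibrated rig.

```python
import numpy as np

def build_elevation_image(points, grid_res=1.0, grid_size=10):
    """Rasterize 3D points (N, 3) into a BEV elevation image:
    each cell keeps the maximum height (z) of the points that fall in it.
    grid_res/grid_size are illustrative, not the paper's settings."""
    half = grid_size * grid_res / 2.0
    elev = np.full((grid_size, grid_size), -np.inf)
    ix = ((points[:, 0] + half) / grid_res).astype(int)  # column index
    iy = ((points[:, 1] + half) / grid_res).astype(int)  # row index
    valid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    for x, y, z in zip(ix[valid], iy[valid], points[valid, 2]):
        elev[y, x] = max(elev[y, x], z)
    elev[np.isinf(elev)] = 0.0  # empty cells default to ground level
    return elev

def fuse_visual_features(points, feats, K, T_cam_lidar,
                         grid_res=1.0, grid_size=10):
    """Project each 3D point through the camera model, look up its
    pixel-wise visual feature, and mean-pool the features into the
    BEV grid cell the point falls in."""
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]      # points in camera frame
    in_front = cam[:, 2] > 0.1                  # keep points ahead of camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective division
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    h, w, c = feats.shape
    visible = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    half = grid_size * grid_res / 2.0
    ix = ((points[:, 0] + half) / grid_res).astype(int)
    iy = ((points[:, 1] + half) / grid_res).astype(int)
    in_grid = (ix >= 0) & (ix < grid_size) & (iy >= 0) & (iy < grid_size)
    sel = visible & in_grid

    fused = np.zeros((grid_size, grid_size, c))
    counts = np.zeros((grid_size, grid_size, 1))
    for x, y, pu, pv in zip(ix[sel], iy[sel], u[sel], v[sel]):
        fused[y, x] += feats[pv, pu]            # accumulate pixel features
        counts[y, x] += 1
    return fused / np.maximum(counts, 1)        # mean-pool per cell
```

Stacking the elevation channel with the fused visual channels gives the bi-modal BEV representation that a descriptor head (e.g. a NetVLAD-style aggregation layer, as the network's name suggests) can consume.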
Benchmarks
| Benchmark | Methodology | recall@top1 | recall@top1% |
|---|---|---|---|
| visual-place-recognition-on-oxford-robotcar-1 | CORAL | 88.93 | 96.13 |