Xiaozhi Chen; Huimin Ma; Ji Wan; Bo Li; Tian Xia

Abstract
This paper aims at high-accuracy 3D object detection in the autonomous driving scenario. We propose Multi-View 3D networks (MV3D), a sensory-fusion framework that takes both a LIDAR point cloud and RGB images as input and predicts oriented 3D bounding boxes. We encode the sparse 3D point cloud with a compact multi-view representation. The network is composed of two subnetworks: one for 3D object proposal generation and another for multi-view feature fusion. The proposal network generates 3D candidate boxes efficiently from the bird's-eye-view representation of the 3D point cloud. We design a deep fusion scheme to combine region-wise features from multiple views and enable interactions between intermediate layers of different paths. Experiments on the challenging KITTI benchmark show that our approach outperforms the state-of-the-art by around 25% and 30% AP on the tasks of 3D localization and 3D detection, respectively. In addition, for 2D detection, our approach obtains 10.3% higher AP than the state-of-the-art LIDAR-based methods on the hard data.
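As a rough illustration of the bird's-eye-view encoding described above, the sketch below rasterizes a LIDAR point cloud into per-cell maximum-height and density maps. The grid extent and 0.5 m resolution are assumptions for illustration; the paper additionally uses multiple height slices and an intensity channel, which are omitted here for brevity.

```python
import numpy as np

def bev_height_density(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), res=0.5):
    """Discretize LIDAR points of shape (N, 3) into bird's-eye-view maps.

    Returns a max-height map and a normalized point-density map.
    Grid ranges and resolution are hypothetical example values.
    """
    nx = int((x_range[1] - x_range[0]) / res)
    ny = int((y_range[1] - y_range[0]) / res)
    height = np.zeros((nx, ny), dtype=np.float32)
    counts = np.zeros((nx, ny), dtype=np.float32)
    for x, y, z in points:
        # Skip points outside the region of interest.
        if not (x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]):
            continue
        i = int((x - x_range[0]) / res)
        j = int((y - y_range[0]) / res)
        height[i, j] = max(height[i, j], z)  # tallest point in the cell
        counts[i, j] += 1                    # number of points in the cell
    # Density normalization as in the paper: min(1, log(N + 1) / log(64)).
    density = np.minimum(1.0, np.log(counts + 1.0) / np.log(64.0))
    return height, density
```

These 2D maps can then be fed to a standard convolutional backbone, which is what makes the bird's-eye-view proposal network efficient compared with operating on the raw sparse points.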
Benchmarks
| Benchmark | Method | AP (%) |
|---|---|---|
| 3d-object-detection-on-kitti-cars-easy-val | MV3D | 71.29 |
| 3d-object-detection-on-kitti-cars-easy-val | MV3D (LiDAR) | 71.19 |
| 3d-object-detection-on-kitti-cars-hard-val | MV3D | 56.56 |
| 3d-object-detection-on-kitti-cars-moderate-1 | MV3D | 62.68 |
| birds-eye-view-object-detection-on-kitti-cars-1 | MV (BV+FV) | 86.18 |
| birds-eye-view-object-detection-on-kitti-cars-2 | MV (BV+FV) | 77.32 |
| birds-eye-view-object-detection-on-kitti-cars-3 | MV (BV+FV) | 76.33 |