EagerMOT: 3D Multi-Object Tracking via Sensor Fusion

Aleksandr Kim, Aljoša Ošep, Laura Leal-Taixé

Abstract

Multi-object tracking (MOT) enables mobile robots to perform well-informed motion planning and navigation by localizing surrounding objects in 3D space and time. Existing methods rely on depth sensors (e.g., LiDAR) to detect and track targets in 3D space, but only up to a limited sensing range due to the sparsity of the signal. On the other hand, cameras provide a dense and rich visual signal that helps to localize even distant objects, but only in the image domain. In this paper, we propose EagerMOT, a simple tracking formulation that eagerly integrates all available object observations from both sensor modalities to obtain a well-informed interpretation of the scene dynamics. Using images, we can identify distant incoming objects, while depth estimates allow for precise trajectory localization as soon as objects are within the depth-sensing range. With EagerMOT, we achieve state-of-the-art results across several MOT tasks on the KITTI and NuScenes datasets. Our code is available at https://github.com/aleksandrkim61/EagerMOT.
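The fusion idea described in the abstract — pairing camera detections with depth-sensor detections so that distant, camera-only objects can still be tracked — can be illustrated with a minimal sketch. This is not the paper's implementation: the box format, the greedy matching, and the IoU threshold are illustrative assumptions; the actual method is in the linked repository.

```python
# Hedged sketch: greedily pair each 3D detection (projected into the
# image plane) with its best-overlapping 2D camera detection. Leftover
# 2D boxes become camera-only instances, which is how distant objects
# outside the depth-sensing range can still be tracked.
# Box format (x1, y1, x2, y2) and threshold are illustrative assumptions.

def iou_2d(a, b):
    """IoU of two axis-aligned image boxes (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def fuse_detections(boxes_2d, boxes_3d_projected, iou_thresh=0.3):
    """Return (fused_pairs, unmatched_2d_indices).

    fused_pairs: list of (3d_index, 2d_index) carrying both modalities.
    unmatched_2d_indices: camera-only detections, still usable for tracking.
    """
    pairs, used_2d = [], set()
    for i3, b3 in enumerate(boxes_3d_projected):
        best_j, best_iou = None, iou_thresh
        for j2, b2 in enumerate(boxes_2d):
            if j2 in used_2d:
                continue
            iou = iou_2d(b3, b2)
            if iou > best_iou:
                best_j, best_iou = j2, iou
        if best_j is not None:
            used_2d.add(best_j)
            pairs.append((i3, best_j))
    unmatched_2d = [j for j in range(len(boxes_2d)) if j not in used_2d]
    return pairs, unmatched_2d
```

In this toy setup, a 2D detection overlapping a projected 3D detection is fused into one instance, while a far-away 2D-only detection survives as its own instance until depth estimates become available.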

Code Repositories

aleksandrkim61/EagerMOT — https://github.com/aleksandrkim61/EagerMOT

Benchmarks

Benchmark | Methodology | Metrics
3D Multi-Object Tracking on NuScenes | EagerMOT | AMOTA: 0.68, MOTA: 0.57, Recall: 0.73
3D Multi-Object Tracking on NuScenes | PolarMOT | AMOTA: 0.66
Multi-Object Tracking and Segmentation | EagerMOT | AssA: 73.75, DetA: 76.11, HOTA: 74.66
Multiple Object Tracking on KITTI (test, online) | EagerMOT | HOTA: 74.39, IDSW: 239, MOTA: 87.82
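The MOTA values in the table are the standard CLEAR-MOT accuracy score, which penalizes misses, false positives, and identity switches against the total number of ground-truth objects (AMOTA on NuScenes averages this score over recall thresholds). A minimal sketch of the standard formula, with illustrative counts:

```python
def mota(num_misses, num_false_positives, num_id_switches, num_gt):
    """CLEAR-MOT accuracy: 1 - (FN + FP + IDSW) / GT.

    num_gt is the total number of ground-truth objects over all frames.
    The score can go negative when errors exceed the ground-truth count.
    """
    return 1.0 - (num_misses + num_false_positives + num_id_switches) / num_gt

# Illustrative counts (not from the benchmark above):
score = mota(num_misses=5, num_false_positives=3, num_id_switches=2, num_gt=100)
```

With 10 total errors over 100 ground-truth objects, the sketch yields a MOTA of 0.90.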
