Qingwen Zhang; Yi Yang; Peizheng Li; Olov Andersson; Patric Jensfelt

Abstract
Scene flow estimation predicts the 3D motion at each point in successive LiDAR scans. This detailed, point-level information can help autonomous vehicles accurately predict and understand dynamic changes in their surroundings. Current state-of-the-art methods require annotated data to train scene flow networks, and the expense of labeling inherently limits their scalability. Self-supervised approaches can overcome these limitations, yet face two principal challenges that hinder optimal performance: point distribution imbalance and disregard for object-level motion constraints. In this paper, we propose SeFlow, a self-supervised method that integrates efficient dynamic classification into a learning-based scene flow pipeline. We demonstrate that classifying static and dynamic points helps design targeted objective functions for different motion patterns. We also emphasize the importance of internal cluster consistency and correct object point association to refine the scene flow estimation, in particular on object details. Our real-time capable method achieves state-of-the-art performance on the self-supervised scene flow task on the Argoverse 2 and Waymo datasets. The code is open-sourced at https://github.com/KTH-RPL/SeFlow along with trained model weights.
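The two ideas the abstract highlights — separating static from dynamic points and enforcing consistent motion within each object cluster — can be illustrated with a minimal sketch. This is not SeFlow's actual implementation; the displacement threshold, function names, and clustering inputs are hypothetical stand-ins for the paper's dynamic classification and cluster-consistency objective.

```python
import numpy as np

def dynamic_mask(points_t0, matched_t1, thresh=0.5):
    """Label a point dynamic if its matched displacement between
    ego-motion-compensated scans exceeds a threshold (a simplified,
    hypothetical stand-in for SeFlow's dynamic classification)."""
    disp = np.linalg.norm(matched_t1 - points_t0, axis=1)
    return disp > thresh

def cluster_consistency_loss(flow, cluster_labels, dyn_mask):
    """Penalize the deviation of each dynamic point's predicted flow
    from its cluster's mean flow, encouraging object-level (roughly
    rigid) motion rather than per-point noise."""
    loss = 0.0
    for c in np.unique(cluster_labels[dyn_mask]):
        f = flow[dyn_mask & (cluster_labels == c)]
        loss += np.mean(np.linalg.norm(f - f.mean(axis=0), axis=1))
    return loss
```

In this toy form, a perfectly rigid cluster (identical flow vectors for all its points) contributes zero loss, while scattered per-point predictions on the same object are penalized — the targeted-objective idea the abstract describes, applied only to points classified as dynamic.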
Benchmarks
| Benchmark | Methodology | EPE 3-Way | EPE Background Static | EPE Foreground Dynamic | EPE Foreground Static |
|---|---|---|---|---|---|
| scene-flow-estimation-on-argoverse-2 | SeFlow | 0.048590 | 0.005990 | 0.121375 | 0.018404 |
| self-supervised-scene-flow-estimation-on-1 | SeFlow | 0.048590 | 0.005990 | 0.121375 | 0.018404 |