Qingwen Zhang; Yi Yang; Heng Fang; Ruoyu Geng; Patric Jensfelt

Abstract
Scene flow estimation determines the 3D motion field of a scene by predicting the motion of its points, a task of particular importance to autonomous driving. Many networks that take large-scale point clouds as input voxelize them into a pseudo-image for real-time inference. However, voxelization often discards point-specific features, making it challenging to recover them for scene flow estimation. Our paper introduces DeFlow, which enables a transition from voxel-based features to point features using Gated Recurrent Unit (GRU) refinement. To further improve scene flow estimation, we formulate a novel loss function that accounts for the data imbalance between static and dynamic points. Evaluations on the Argoverse 2 scene flow task show that DeFlow achieves state-of-the-art results on large-scale point cloud data, demonstrating better performance and efficiency than competing networks. The code is open-sourced at https://github.com/KTH-RPL/deflow.
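The imbalance-aware loss idea can be illustrated with a minimal sketch: up-weight the end-point error of dynamic points so the many static points do not dominate training. This is a hypothetical illustration, not DeFlow's actual loss; the function name, the `dyn_weight` constant, and the weighting scheme are assumptions for demonstration only.

```python
import numpy as np

def imbalance_aware_epe_loss(pred_flow, gt_flow, dynamic_mask, dyn_weight=10.0):
    """Illustrative sketch (not the paper's loss): weighted end-point error.

    pred_flow, gt_flow: (N, 3) arrays of predicted / ground-truth flow vectors.
    dynamic_mask: (N,) boolean array, True for dynamic points.
    dyn_weight: illustrative constant up-weighting the (rarer) dynamic points.
    """
    # Per-point end-point error: Euclidean distance between predicted and true flow.
    epe = np.linalg.norm(pred_flow - gt_flow, axis=1)
    # Dynamic points get a larger weight to counter the static/dynamic imbalance.
    weights = np.where(dynamic_mask, dyn_weight, 1.0)
    return float((weights * epe).sum() / weights.sum())
```

With a uniform weight (`dyn_weight=1.0`) this reduces to the mean end-point error; increasing `dyn_weight` shifts the loss toward the dynamic points' error.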
Benchmarks
| Benchmark | Methodology | EPE 3-Way | EPE Background Static | EPE Foreground Dynamic | EPE Foreground Static |
|---|---|---|---|---|---|
| scene-flow-estimation-on-argoverse-2 | DeFlow | 0.034295 | 0.004561 | 0.073231 | 0.025093 |