Fast Neural Scene Flow
Xueqian Li; Jianqiao Zheng; Francesco Ferroni; Jhony Kaesemodel Pontes; Simon Lucey

Abstract
Neural Scene Flow Prior (NSFP) is of significant interest to the vision community due to its inherent robustness to out-of-distribution (OOD) effects and its ability to handle dense lidar points. The approach uses a coordinate neural network to estimate scene flow at runtime, without any training. However, it is up to 100 times slower than current state-of-the-art learning methods. In other applications of coordinate networks, such as image, video, and radiance function reconstruction, efforts to speed up runtime performance have centered on architectural changes. In this paper, we demonstrate that scene flow is different: the dominant computational bottleneck stems from the loss function itself (i.e., Chamfer distance). Further, we rediscover the distance transform (DT) as an efficient, correspondence-free loss function that dramatically speeds up runtime optimization. Our Fast Neural Scene Flow (FNSF) approach achieves, for the first time, real-time performance comparable to learning methods, without any training or OOD bias, on two of the largest open autonomous vehicle (AV) lidar datasets: Waymo Open and Argoverse.
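To make the distance-transform idea concrete, here is a minimal sketch of a correspondence-free DT loss. It precomputes a Euclidean distance transform over a voxel grid covering the target point cloud, then evaluates the loss on warped source points with a grid lookup instead of a nearest-neighbor search. The function names, cell size, and padding are illustrative assumptions, not the paper's implementation; the paper also queries the grid differentiably during optimization, whereas this numpy sketch uses a non-differentiable nearest-voxel lookup purely to show the mechanics.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt


def build_dt(target, cell=0.2, pad=2.0):
    """Precompute a Euclidean distance transform over a voxel grid.

    target: (N, 3) target point cloud. cell/pad are hypothetical defaults.
    Returns the DT grid (metres) and the grid origin.
    """
    lo = target.min(axis=0) - pad  # pad so warped points stay inside the grid
    dims = np.ceil((target.max(axis=0) + pad - lo) / cell).astype(int) + 1
    empty = np.ones(dims, dtype=bool)              # True = unoccupied voxel
    idx = np.floor((target - lo) / cell).astype(int)
    empty[tuple(idx.T)] = False                    # voxels containing target points
    # Distance from every voxel to the nearest occupied voxel, in metres.
    dt = distance_transform_edt(empty, sampling=cell)
    return dt, lo


def dt_loss(source, flow, dt, lo, cell=0.2):
    """Evaluate the DT at the warped source points: one grid lookup per point,
    versus a nearest-neighbor search per point for Chamfer distance."""
    idx = np.floor((source + flow - lo) / cell).astype(int)
    idx = np.clip(idx, 0, np.array(dt.shape) - 1)  # clamp to grid bounds
    return dt[tuple(idx.T)].mean()
```

The speed-up comes from amortization: the DT is built once per frame pair, so each optimization iteration replaces per-point nearest-neighbor queries with constant-time grid lookups. The voxel size trades accuracy against memory and precomputation cost.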
Benchmarks
| Benchmark | Methodology | EPE 3-Way | EPE Background Static | EPE Foreground Dynamic | EPE Foreground Static |
|---|---|---|---|---|---|
| scene-flow-estimation-on-argoverse-2 | FastNSF | 0.111820 | 0.090712 | 0.163388 | 0.081360 |
| self-supervised-scene-flow-estimation-on-1 | FastNSF | 0.111820 | 0.090712 | 0.115796 | 0.031576 |