SimAug: Learning Robust Representations from Simulation for Trajectory Prediction
Junwei Liang, Alexander Hauptmann, Lu Jiang

Abstract
This paper studies the problem of predicting future trajectories of people in unseen cameras of novel scenarios and views. We approach this problem in a real-data-free setting, in which the model is trained only on 3D simulation data and applied out-of-the-box to a wide variety of real cameras. We propose a novel approach to learn robust representations by augmenting the simulation training data so that the representations generalize better to unseen real-world test data. The key idea is to mix the feature of the hardest camera view with the adversarial feature of the original view. We refer to our method as *SimAug*. We show that *SimAug* achieves promising results on three real-world benchmarks using zero real training data, and state-of-the-art performance on the Stanford Drone and VIRAT/ActEV datasets when using in-domain training data. Code and models are released at https://next.cs.cmu.edu/simaug
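The sketch below illustrates the key idea stated in the abstract: select the hardest camera view of a simulated scene, build an adversarial version of the original view, and mix the two features. It is a minimal, illustrative interpretation and not the authors' released implementation; the PyTorch-style model with a hypothetical `encode` method, the FGSM-style perturbation, the MSE loss, and the mixing weight `alpha` are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def simaug_feature(model, x_orig, x_views, y, epsilon=0.1, alpha=0.5):
    """Illustrative sketch of the SimAug idea (not the authors' code).

    model:   predictor assumed to expose forward() and a feature encoder encode()
    x_orig:  input from the original simulated camera view
    x_views: the same scene rendered from other camera views
    y:       ground-truth trajectory targets
    epsilon: assumed adversarial step size; alpha: assumed mixing weight
    """
    # 1. Pick the "hardest" view: the one with the highest prediction loss.
    with torch.no_grad():
        losses = [F.mse_loss(model(v), y) for v in x_views]
    hardest = x_views[int(torch.argmax(torch.stack(losses)))]

    # 2. Build an adversarial version of the original view with a single
    #    FGSM-style gradient step (one common choice, assumed here).
    x_adv = x_orig.clone().detach().requires_grad_(True)
    loss = F.mse_loss(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

    # 3. Mix (convex combination) the hardest-view feature with the
    #    adversarial original-view feature, mixup-style.
    feat_hardest = model.encode(hardest)
    feat_adv = model.encode(x_adv)
    return alpha * feat_hardest + (1.0 - alpha) * feat_adv
```

The returned mixed feature would then be fed to the trajectory decoder during training, so the model only ever sees augmented simulation data while being pushed toward view- and perturbation-robust representations.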
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| trajectory-forecasting-on-actev | SimAug | ADE-8/12: 17.96 |
| trajectory-forecasting-on-stanford-drone | SimAug | ADE-8/12 @K = 20: 10.27 |