PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning
Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip S. Yu, Mingsheng Long

Abstract
The predictive learning of spatiotemporal sequences aims to generate future images by learning from the historical context, where the visual dynamics are believed to have modular structures that can be learned with compositional subsystems. This paper models these structures with PredRNN, a new recurrent network in which a pair of memory cells are explicitly decoupled, operate with nearly independent transitions, and finally form unified representations of the complex environment. Concretely, besides the original memory cell of LSTM, the network features a zigzag memory flow that propagates in both bottom-up and top-down directions across all layers, enabling the visual dynamics learned at different levels of the RNN to communicate. It also leverages a memory decoupling loss to keep the memory cells from learning redundant features. We further propose a new curriculum learning strategy that forces PredRNN to learn long-term dynamics from context frames and that generalizes to most sequence-to-sequence models. We provide detailed ablation studies to verify the effectiveness of each component. Our approach obtains highly competitive results on five datasets in both action-free and action-conditioned predictive learning scenarios.
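The two mechanisms named in the abstract lend themselves to a compact illustration. The sketch below is a minimal, simplified PyTorch rendition and not the authors' implementation (see the official code repositories for that): a hypothetical `SimpleSTCell` keeps the standard temporal memory C alongside a spatiotemporal memory M, `decouple_loss` penalizes the cosine similarity between the increments written to the two memories, and `rollout` realizes the zigzag flow by passing M bottom-up through the layer stack within a time step and handing the top layer's M back to the bottom layer at the next step. All names and the gate layout here are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class SimpleSTCell(nn.Module):
    """Hypothetical, simplified ST-LSTM-style cell with two memories:
    a temporal cell C (as in LSTM) and a spatiotemporal memory M."""
    def __init__(self, channels):
        super().__init__()
        # One conv producing all gates for both memory transitions (simplified).
        self.gates = nn.Conv2d(3 * channels, 7 * channels, kernel_size=3, padding=1)
        self.out = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x, h, c, m):
        g = self.gates(torch.cat([x, h, m], dim=1))
        i, f, g_c, i_m, f_m, g_m, o = torch.chunk(g, 7, dim=1)
        # Increment written into the standard temporal memory C.
        delta_c = torch.sigmoid(i) * torch.tanh(g_c)
        c = torch.sigmoid(f) * c + delta_c
        # Increment written into the spatiotemporal memory M.
        delta_m = torch.sigmoid(i_m) * torch.tanh(g_m)
        m = torch.sigmoid(f_m) * m + delta_m
        h = torch.sigmoid(o) * torch.tanh(self.out(torch.cat([c, m], dim=1)))
        return h, c, m, delta_c, delta_m

def decouple_loss(delta_c, delta_m, eps=1e-8):
    """Memory decoupling: penalize the |cosine similarity| between the
    increments of C and M so the two cells learn non-redundant features."""
    dc = delta_c.flatten(2)  # (B, C, H*W)
    dm = delta_m.flatten(2)
    cos = (dc * dm).sum(-1) / (dc.norm(dim=-1) * dm.norm(dim=-1) + eps)
    return cos.abs().mean()

def rollout(cells, frames, channels):
    """Zigzag memory flow over a stack of cells for T time steps:
    within a step, M flows bottom-up through the layers; across steps,
    the top layer's M is fed back to the bottom layer (top-down)."""
    B, T, _, H, W = frames.shape
    L = len(cells)
    h = [torch.zeros(B, channels, H, W) for _ in range(L)]
    c = [torch.zeros(B, channels, H, W) for _ in range(L)]
    m = torch.zeros(B, channels, H, W)  # shared spatiotemporal memory
    loss_d = 0.0
    for t in range(T):
        x = frames[:, t]
        for l in range(L):
            h[l], c[l], m, dc, dm = cells[l](x if l == 0 else h[l - 1],
                                             h[l], c[l], m)
            loss_d = loss_d + decouple_loss(dc, dm)
    return h[-1], loss_d / (T * L)
```

Note that, as described in the abstract, the decoupling regularizer acts on the memory increments rather than on the memory states themselves, so C and M can still evolve jointly while being discouraged from writing redundant features.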
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| video-prediction-on-kth | PredRNN-V2 | Cond: 10, Pred: 20; LPIPS: 0.139, PSNR: 28.37, SSIM: 0.839 |
| video-prediction-on-moving-mnist | PredRNN-V2 | LPIPS: 0.071, MSE: 48.4, SSIM: 0.891 |
| weather-forecasting-on-sevir | PredRNN | MSE: 3.9014, mCSI: 0.4080 |