Only the Curve Shape Matters: Training Foundation Models for Zero-Shot Multivariate Time Series Forecasting through Next Curve Shape Prediction
Cheng Feng, Long Huang, Denis Krompass

Abstract
We present General Time Transformer (GTT), an encoder-only style foundation model for zero-shot multivariate time series forecasting. GTT is pretrained on a large dataset of 200M high-quality time series samples spanning diverse domains. In our proposed framework, the task of multivariate time series forecasting is formulated as a channel-wise next curve shape prediction problem, where each time series sample is represented as a sequence of non-overlapping curve shapes with a unified numerical magnitude. GTT is trained to predict the next curve shape based on a window of past curve shapes in a channel-wise manner. Experimental results demonstrate that GTT exhibits superior zero-shot multivariate forecasting capabilities on unseen time series datasets, even surpassing state-of-the-art supervised baselines. Additionally, we investigate the impact of varying GTT model parameters and training dataset scales, observing that the scaling law also holds in the context of zero-shot multivariate time series forecasting.
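The core preprocessing idea described above can be illustrated with a minimal sketch: split one channel of a time series into non-overlapping patches ("curve shapes") and rescale the window to a unified numerical magnitude, yielding a context of past shapes and a next-shape target. The patch length, context size, and the choice of standardizing by the context mean and standard deviation are illustrative assumptions, not the paper's exact values.

```python
import numpy as np

def to_curve_shapes(series, patch_len=64, n_context=15):
    """Split a single channel into non-overlapping curve shapes.

    patch_len and n_context are hypothetical values for illustration.
    Returns (past_shapes, next_shape) for a next-curve-shape
    prediction task on this channel.
    """
    total = patch_len * (n_context + 1)
    window = np.asarray(series[-total:], dtype=float)

    # Unify numerical magnitude: standardize the whole window using
    # statistics of the context portion only (an assumption; the paper
    # may normalize differently).
    context = window[: patch_len * n_context]
    mu, sigma = context.mean(), context.std() + 1e-8
    window = (window - mu) / sigma

    # Reshape into (n_context + 1) non-overlapping curve shapes.
    patches = window.reshape(n_context + 1, patch_len)
    return patches[:-1], patches[-1]  # past shapes, prediction target
```

Applied channel-wise to a multivariate sample, each channel independently yields a sequence of past curve shapes and a next-shape target, matching the channel-wise formulation described in the abstract.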
Benchmarks
| Benchmark | Model | MAE | MSE |
|---|---|---|---|
| time-series-forecasting-on-etth1-336-1 | GTT-Large | 0.419 | 0.424 |
| time-series-forecasting-on-etth1-336-1 | GTT-Small | 0.427 | 0.459 |
| time-series-forecasting-on-etth1-336-1 | GTT-Large (50M training samples) | 0.444 | 0.475 |
| time-series-forecasting-on-etth1-336-1 | GTT-Large (100M training samples) | 0.432 | 0.468 |
| time-series-forecasting-on-etth1-336-1 | GTT-Large (fine-tuned) | 0.418 | 0.433 |
| time-series-forecasting-on-etth1-336-1 | GTT-Tiny | 0.436 | 0.466 |