Title: Sundial: A Family of Highly Capable Time Series Foundation Models
URL Source: https://arxiv.org/html/2502.00816

Yong Liu, Guo Qin, Zhiyuan Shi, Zhi Chen, Caiyin Yang, Xiangdong Huang, Jianmin Wang, Mingsheng Long

Abstract
We introduce Sundial, a family of native, flexible, and scalable time series foundation models. To predict the next-patch’s distribution, we propose a TimeFlow Loss based on flow-matching, which facilitates native pre-training of Transformers on continuous-valued time series without discrete tokenization. Conditioned on arbitrary-length time series, our models are pre-trained without specifying any prior distribution and can generate multiple probable predictions, achieving more flexibility in representation learning than using parametric densities. Towards time series foundation models, we leverage minimal but crucial adaptations of Transformers and curate TimeBench with one trillion time points, comprising mostly real-world datasets and synthetic data. By mitigating mode collapse via TimeFlow Loss, we pre-train a family of Sundial models on TimeBench, which achieve unprecedented model capacity and generalization performance. In addition to excellent scalability, Sundial achieves state-of-the-art results on both point and probabilistic forecasting benchmarks with a just-in-time inference speed, i.e., making zero-shot predictions within a few milliseconds. We believe that Sundial’s pioneering generative forecasting capability can improve model reliability in real-world decision-making. Code is available at: https://github.com/thuml/Sundial.
Keywords: time series, foundation models, pre-training, Transformers

1 Introduction
Time series forecasting has fascinated people for thousands of years. Although humans could already tell time with instruments such as sundials as early as 3000 BC, time series forecasting is intrinsically non-deterministic (box2015time). Therefore, generating a variety of probable predictions is crucial for decision-making. This growing demand has motivated numerous statistical approaches over the past decades (hyndman2018forecasting; box2013box), which provide high-profile theories and probabilistic tools for making reliable schedules. Recent advancements have brought a boom of deftly designed models that automatically learn intricate dynamics and correlations from raw data (oreshkin2019n; nie2022time; zhang2023crossformer; liu2023itransformer). Despite their impressive performance, deep models necessitate task-specific training on sufficient in-distribution data. Motivated by advances in large models (bommasani2021opportunities), pre-trained time series foundation models have shown promising capabilities on out-of-distribution tasks (das2023decoder; liutimer; woo2024unified; ansari2024chronos).
Figure 1:A native time series model operates on the original series of continuous values. A flexible foundation model is pre-trained without specifying prior distributions. Sundial is the first family of native and flexible time series foundation models.
Current research on time series foundation models has converged on building unified, scalable, and out-of-the-box forecasters, whose zero-shot performance is close to, and sometimes surpasses, that of supervised methods (aksugift). Notably, Transformers (radford2018improving) are currently the de facto architecture of these models. While pre-trained Transformers with inherent generative ability have driven great success in language, image, and video generation (ramesh2021zero; openai2023gpt; liu2024sora), most time series foundation models are not "generative" or, more specifically, probabilistic forecasters, which limits their reliability in decision-making. Although parametric densities specified by prior distributions (wen2017multi; woo2024unified) can be adopted to address uncertainty in time series forecasting, they can reduce the capacity of the distributions learned by pre-trained models, especially for the time series modality, which is characterized by high heterogeneity. To learn arbitrarily intricate distributions without mode collapse, language modeling (bengio2000neural), which learns a categorical distribution via cross-entropy loss, has inspired subsequent works (gruver2023large; ansari2024chronos) that treat time series as a foreign language using discrete tokenization. Still, discrepancies between continuous-valued time series and discrete language tokens can lead to out-of-vocabulary issues and coarse-grained prediction intervals.
As shown in Figure 1, Sundial is presented as the first family of generative models among time series foundation models. Since foundation models are meant to learn complicated distributions from extensive datasets and transfer across agnostic downstream datasets, we do not specify any prior parametric density, such as unimodal Gaussians or multimodal Gaussian mixtures. Instead, we delve into generative modeling to tame Transformers as native, flexible, and scalable time series foundation models. After comparison with denoising diffusion models (li2024autoregressive), we opt for a simple yet effective flow-matching framework (lipman2022flow), which provides notable efficiency and sample quality (tong2023improving). We propose TimeFlow Loss, a parameterized training objective (zhang2018unreasonable) that lets autoregressive models learn and sample from each token's predictive distribution. Optimizing models in the original continuous-valued domain, TimeFlow Loss facilitates patch-level generation and enables fast inference, which is naturally compatible with the time series modality.
In addition to TimeFlow, we enhance the Transformer with minimal but critical adaptations. We develop feasible patch tokenization for arbitrary-length input time series. We adopt RoPE (su2024roformer), Pre-LN (xiong2020layer), FlashAttention (dao2022flashattention), and KV Cache (pope2023efficiently), which are crucial but generally neglected in the development of time series foundation models. Besides, we pre-train our models by multi-patch prediction to reduce autoregression steps. We realize a rapid generation of multiple samples by reusing a shared lookback representation. Beyond facilitating scalable pre-training, these adaptations help real-time long-context inference and long-term generation.
To validate the scaling law of time series foundation models, we collect and curate TimeBench with an unprecedented volume of one trillion time points. We present Sundial as a family of highly capable foundation models, which achieve state-of-the-art results on three large-scale, well-recognized benchmarks: Time-Series-Library (TSLib) (wu2022timesnet), GIFT-Eval (aksugift), and FEV (ansari2024chronos). Our contributions lie in these aspects:
• We propose TimeFlow Loss to predict the next patch's distribution, allowing Transformers to be trained without discrete tokenization and to make probable predictions.
• We present Sundial, a family of scalable and efficient time series foundation models built upon our enhanced Transformer and pre-trained on a trillion time points.
• Experimentally, Sundial achieves state-of-the-art zero-shot performance on point and probabilistic forecasting benchmarks, including TSLib, GIFT-Eval, and FEV, indicating a promising generative approach for the future improvement of time series foundation models.
2 Related Work

2.1 Time Series Forecasting
Forecasting is essential for decision-making. Advancements in deep learning for time series include theory-inspired deep modules (wu2021autoformer; liu2023koopa; wu2022timesnet), architecture-oriented adaptations (bai2018empirical; salinas2020deepar; lim2021temporal), and time series preprocessing (kim2021reversible; nie2022time). Deep models learn the dataset-level distribution and benefit from strong generalization and model capacity. Statistical methods conduct case-by-case fitting on input series, achieving notable performance on small data (ke2017lightgbm; hyndman2018forecasting).
One line of effort towards more capable forecasters focuses on foundation models (bommasani2021opportunities), which address data-scarce scenarios through pre-training: they support zero-shot forecasting, infer as quickly as statistical methods, and retain the large capacity of deep models. Another line addresses uncertainty in time series forecasting, with a growing research emphasis on probabilistic forecasting (woo2024unified; ansari2024chronos). While parametric densities can be adopted as training objectives for probabilistic forecasting, they can be too specific to accommodate the heterogeneity of large-scale datasets, resulting in mode collapse in representation learning and over-smooth predictions (Figures 14-15). In this work, we introduce generative time series foundation models, which naturally address uncertainty in forecasting.
2.2 Time Series Foundation Models
Recent research has concentrated on building versatile large time series models (liang2024foundation). With the advances made in large language models, the Transformer has become the dominant architecture. Several works adapt Transformers to address the unique 2D-dimensionality and heterogeneity of time series (woo2024unified; liu2024timer). Specifically, our work delves into tokenization and optimization. Models such as TimesFM (das2023decoder), Timer (liu2024timer; liutimer), and Time-MoE (shi2024time) embed continuous values and fit unimodal distributions via MSE or quantile loss (wen2017multi). However, such prior-based losses may cause mode collapse because predictive distributions are highly divergent across domains. Besides, these models cannot provide the confidence level of predictions, limiting their reliability for decision-making. Based on continuous tokenization, Moirai (woo2024unified) presents a probabilistic model learning a mixture of distributions, but this prior can still fail to accommodate complex distributions. Inspired by language modeling, Chronos (ansari2024chronos) discretizes series via bucket quantization, learning more flexible categorical distributions by cross-entropy. Still, discrete tokenization is applied at each time point, which can lead to long contexts, and the final performance can be sensitive to the quantization technique. In contrast, we tame Transformers as native time series foundation models, learning flexible distributions without discrete tokenization.
2.3 Generative Modeling for Time Series
By addressing complicated distributions during pre-training, generative modeling has become a focal point in the development of various foundation models (zhao2023survey; liu2024sora). While this direction for time series mostly focused on time series generation (tashiro2021csdi) and task-specific forecasters (rasul2021autoregressive; shen2023non; kollovieh2024flow), generative modeling for time series foundation models is hardly explored. With the comparable flexibility in distribution learning as language modeling, diffusion denoising (sohl2015deep) and flow-matching (lipman2022flow) have gained increasing prevalence in continuous-valued modalities (lipman2024flow). Compared with diffusion denoising models, flow-matching provides a simple yet efficient framework. With fewer steps involved in the forward and reverse processes, large models based on flow-matching have shown superior performance in image generation (esser2024scaling).
Despite sharing value continuity, generating images and generating future time series are fundamentally different tasks due to the autoregressive nature of forecasting. Our proposed TimeFlow Loss is designed for autoregressive models to conduct conditional generation; it is a parameterized loss function (zhang2018unreasonable) for arbitrary distributions that enhances the representation learning of foundation models.
3 Preliminaries

3.1 Flow-Matching
The goal of generative modeling is to learn the underlying probability distribution that generates the data. Flow-matching transforms a sample $\mathbf{x}_0 \sim p_0$ drawn from a source distribution into a sample $\mathbf{x}_1 \sim p_1$ drawn from a target distribution, via a transformation that is continuous in time. For $d$-dimensional distributions, it is defined by a time-dependent velocity field $u_t: [0,1] \times \mathbb{R}^d \to \mathbb{R}^d$, which is the solution of the ordinary differential equation (ODE):

$$\frac{d}{dt}\psi_t(\mathbf{x}) = u_t(\psi_t(\mathbf{x})), \qquad \psi_0(\mathbf{x}) = \mathbf{x}.$$
The velocity field $u_t$ determines a flow $\psi_t$. For all $t \in [0,1]$, $\psi_t$ generates the probability path $p_t$ that interpolates $p_0$ and $p_1$, i.e., $\mathbf{x}_t = \psi_t(\mathbf{x}_0) \sim p_t$ for $\mathbf{x}_0 \sim p_0$. Flow-matching trains a network $u_t^\theta$ parameterized by $\theta$ to fit the velocity field $u_t$, a regression task formulated as the Flow-Matching objective:

$$\mathcal{L}_{\mathrm{FM}}(\theta) = \mathbb{E}_{t, \mathbf{x}_t} \left\| u_t^\theta(\mathbf{x}_t) - u_t(\mathbf{x}_t) \right\|^2.$$
Furthermore, lipman2022flow proved the equivalence of optimizing the Conditional Flow-Matching objective:

$$\mathcal{L}_{\mathrm{CFM}}(\theta) = \mathbb{E}_{t, \mathbf{x}_t, \mathbf{x}_1} \left\| u_t^\theta(\mathbf{x}_t) - u_t(\mathbf{x}_t \mid \mathbf{x}_1) \right\|^2.$$
Leveraging the conditional optimal-transport (linear) path and a Gaussian source, the objective can be formulated as:

$$\mathcal{L}_{\mathrm{CFM}}^{\mathrm{Gauss}}(\theta) = \mathbb{E}_{t, \epsilon, \mathbf{x}_1} \left\| u_t^\theta(\mathbf{x}_t) - (\mathbf{x}_1 - \mathbf{x}_0) \right\|^2, \tag{1}$$

where $t \sim \mathcal{U}[0,1]$, $\mathbf{x}_0 = \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, and $\mathbf{x}_t = t\,\mathbf{x}_1 + (1-t)\,\mathbf{x}_0$.
Consequently, we can train a generative network on samples from the target distribution and generate new samples by applying a push-forward process to samples drawn from a simple source Gaussian:

$$\mathbf{x}_{t+\Delta t} = \mathbf{x}_t + u_t^\theta(\mathbf{x}_t)\,\Delta t, \qquad \mathbf{x}_0 \sim \mathcal{N}(\mathbf{0}, \mathbf{I}),\ t \in [0,1]. \tag{2}$$
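To make Equations (1) and (2) concrete, here is a minimal NumPy sketch (illustrative, not the paper's implementation): with the oracle conditional velocity $\mathbf{x}_1 - \mathbf{x}_0$ of the linear path, the Euler push-forward transports a source sample exactly onto the target sample.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
x1 = rng.normal(size=d)   # a sample from the target distribution
x0 = rng.normal(size=d)   # a sample from the source Gaussian

# Conditional OT (linear) path: x_t = t*x1 + (1-t)*x0, with velocity x1 - x0.
# Eq. (1) trains u_theta(x_t) to regress exactly this velocity.
target_velocity = x1 - x0

# Push-forward process of Eq. (2) with the oracle velocity, Euler steps dt = 1/K.
K = 50
x = x0.copy()
for _ in range(K):
    x = x + target_velocity * (1.0 / K)

assert np.allclose(x, x1)  # the straight-line flow reaches the target sample
```

In practice the oracle velocity is unknown, and a trained network $u_t^\theta$ takes its place in the Euler loop.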
Figure 2: Overall architecture of Sundial. The input time series is divided into patch tokens, which are embedded from original continuous values. The patch embeddings are fed into a decoder-only Transformer, a stabilized and accelerated variant that learns token representations via causal self-attention. The model is optimized with our TimeFlow Loss, a parameterized loss function that models the per-token probability distribution conditioned on the learned representations and generates multiple plausible predictions under the flow-matching framework.

3.2 Generative Models for Probabilistic Forecasting
Given historical observations $x_{1:t} = \{x_1, \dots, x_t\}$, the goal of time series forecasting is to predict the future series $x_{t+1:t+f} = \{x_{t+1}, \dots, x_{t+f}\}$. The task can be generally formulated as $p(x_{t+1:t+f} \mid \mathbf{h}_t)$, where $\mathbf{h}_t = f_\phi(x_{1:t})$ is the representation learned by a deep model $f_\phi$. In probabilistic forecasting, explicit optimization objectives such as MSE or quantile loss are used to predict statistics of the future series, which amounts to specifying $p$ as a prior distribution. While a single parametric density generally fits a small amount of data well, it can become the major bottleneck when scaling time series foundation models. Inspired by the success of large generative models (rombach2022high; openai2023gpt; esser2024scaling), we introduce generative modeling to realize probabilistic forecasting:

$$p_\theta(x_{t+1:t+f} \mid \mathbf{h}_t) = g_\theta(f_\phi(x_{1:t})). \tag{3}$$

Here, $g_\theta$ is a small trainable generative network conditioned on the representations learned by $f_\phi$, with which it is jointly optimized. While the generative model automatically fits the target distribution, it can sample raw predictions and calculate their statistics for probabilistic forecasting. This aim is conceptually related to conformal prediction (vovk2005algorithmic) but models uncertainty beyond prediction intervals.
4 Approach

In this work, we adopt a univariate pre-training paradigm, using the S3 format proposed by liutimer to handle multivariate data. To mitigate discrepancies in value ranges, we normalize time series individually per variable. Afterwards, we sample varying-length training samples with a maximum context length of 2880. As a foundation model, Sundial is required to predict out-of-distribution series of varied lengths during inference.
4.1 Sundial
As shown in Figure 2, the Sundial models consist of three parts: (1) time series tokenization, including context-level re-normalization and a patch embedding that handles any-length time series; (2) a Transformer backbone that learns per-token representations of time series; and (3) TimeFlow Loss, a parameterized loss function that models the per-token distribution and generates raw series during inference. Intuitively, Sundial can be regarded as a deep ARMA (Auto-Regression and Moving-Average) model: the Transformer learns token representations autoregressively, and conditioned on the lookback representations, TimeFlow transforms random noise into non-deterministic predictions.
4.1.1 Time Series Tokenization

Re-Normalization
We adopt stationarization (liu2022non), a non-parametric two-stage instance normalization conducted within each sample, initially proposed to mitigate the non-stationarity of time series. Here, it helps address temporal distribution shift and outlier ranges in input series, improving generalization for zero-shot forecasting.
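A minimal sketch of such two-stage instance normalization (hypothetical NumPy helpers `renorm`/`denorm`, not the released code): statistics are computed on each input sample and later reused to de-normalize the model's output.

```python
import numpy as np

def renorm(context):
    """Stage 1: standardize each input sample by its own statistics."""
    mu, sigma = context.mean(), context.std() + 1e-8
    return (context - mu) / sigma, (mu, sigma)

def denorm(pred, stats):
    """Stage 2: map model outputs back to the original value range."""
    mu, sigma = stats
    return pred * sigma + mu

x = np.array([10.0, 12.0, 11.0, 13.0])
z, stats = renorm(x)
assert np.isclose(z.mean(), 0.0) and np.allclose(denorm(z, stats), x)
```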
Patch Embedding

Given a univariate time series $\mathbf{X} = \{x_1, \dots, x_T\}$, it is divided into patches $\mathbf{x}_i = x_{1+(i-1)P:iP}$ of length $P$. To handle non-divisible lengths, we pad the input at the beginning and use a binary mask $\mathbf{m}_i \in \mathbb{R}^P$ for each patch to indicate padded positions, yielding $N = \lceil T / P \rceil$ input tokens. Subsequently, we use a shared MLP $: \mathbb{R}^{2P} \mapsto \mathbb{R}^D$ to embed all patch tokens:

$$\mathbf{h}_i = \mathrm{PatchEmbed}\left(\mathrm{Concat}(\mathbf{x}_i, \mathbf{m}_i)\right), \tag{4}$$

where $\mathbf{h}_i \in \mathbb{R}^D$ and $D$ is the token embedding dimension. Unlike point-level quantization (ansari2024chronos), we retain original values without discretization, which also reduces the context length (in tokens) of the Transformer.
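The padding-and-masking scheme can be sketched as follows (a hypothetical `patchify` helper in NumPy; the real model feeds the resulting $2P$-dimensional tokens to the shared MLP):

```python
import numpy as np

def patchify(x, P):
    """Left-pad a series to a multiple of P; return N x 2P tokens (values + mask)."""
    T = len(x)
    N = -(-T // P)                    # N = ceil(T / P) patch tokens
    pad = N * P - T
    x_pad = np.concatenate([np.zeros(pad), x])
    mask = np.concatenate([np.zeros(pad), np.ones(T)])  # 1 marks real values
    # Concat(x_i, m_i): each token is the 2P-dimensional input of PatchEmbed.
    return np.concatenate([x_pad.reshape(N, P), mask.reshape(N, P)], axis=-1)

tokens = patchify(np.arange(10, dtype=float), P=4)
print(tokens.shape)  # (3, 8): N = ceil(10/4) = 3 tokens of dimension 2P = 8
```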
4.1.2 Transformer Backbone
Given $N$ token embeddings $\{\mathbf{h}_i\}$, we apply several crucial adaptations to a decoder-only Transformer to obtain per-token representations aggregated from all previous tokens. First, we adopt Pre-LN (xiong2020layer) to improve pre-training stability. Second, we use causal self-attention with RoPE (su2024roformer), which introduces the position information of patch tokens. It can be formulated as follows (the layer index is omitted for simplicity):

$$\mathcal{A}_{ij} = \mathbf{h}_i^\top \mathbf{W}_{\mathbf{q}} \mathbf{R}_{\Theta, i-j} \mathbf{W}_{\mathbf{k}}^\top \mathbf{h}_j, \qquad \mathrm{Attention}(\mathbf{H}) = \mathrm{Softmax}\left(\frac{\mathrm{Mask}(\mathcal{A})}{\sqrt{d}}\right) \mathbf{H} \mathbf{W}_{\mathbf{v}}, \tag{5}$$

where $\mathbf{W}_{\mathbf{q}}, \mathbf{W}_{\mathbf{k}}, \mathbf{W}_{\mathbf{v}} \in \mathbb{R}^{D \times d}$ project token embeddings $\mathbf{H} = \{\mathbf{h}_i\}$ into $d$-dimensional queries, keys, and values, and $\mathbf{R}_{\Theta, t} \in \mathbb{R}^{d \times d}$ is the rotary matrix with rotation degree $t \cdot \Theta$. Lastly, we implement FlashAttention (dao2022flashattention) and KV Cache (pope2023efficiently), since these deployment enhancements are increasingly emphasized in large foundation models (shoeybi2019megatron; rasley2020deepspeed).
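The point of the rotary formulation is that attention scores depend only on the relative offset $i-j$. A minimal 2-dimensional sketch (illustrative; the rotation frequency `base` is an arbitrary choice for this example):

```python
import numpy as np

def rot(theta):
    """A 2x2 rotary block R_theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

q = np.array([1.0, 0.3])    # query features of token i
k = np.array([0.5, -0.7])   # key features of token j
base = 0.01                 # arbitrary rotation frequency for this sketch

def score(i, j):
    # (R_{i*base} q) . (R_{j*base} k) = q^T R_{(j-i)*base} k: relative only.
    return (rot(i * base) @ q) @ (rot(j * base) @ k)

assert np.isclose(score(5, 2), score(13, 10))  # both have offset i - j = 3
```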
4.1.3 TimeFlow Loss
Given the representations $\{\mathbf{h}_i\}$ extracted by the last Transformer layer, our autoregressive model aims to generate length-$F$ predictions $\hat{\mathbf{y}}_i = \hat{x}_{1+iP : F+iP}$ at each position $i$. Motivated by the empirical observation that a larger patch size improves performance in decoder-only Transformers (das2023decoder), while a small patch size is more flexible for data of different frequencies, we adopt multi-patch prediction ($F > P$) for pre-training, which also reduces the number of autoregressive inference steps.

Based on Equations 1 and 3, we formulate generative forecasting conditioned on a sequential representation $\mathbf{h}_i$:

$$\mathcal{L}(\theta, \mathbf{h}_i) = \mathbb{E}_{t, \epsilon, \mathbf{y}_i} \left\| u_t^\theta(\mathbf{y}_i^{(t)} \mid \mathbf{h}_i) - (\mathbf{y}_i - \mathbf{y}_i^{(0)}) \right\|^2, \tag{6}$$

where $\mathbf{y}_i \in \mathbb{R}^F$ is the ground-truth value, $\mathbf{y}_i^{(0)} = \epsilon \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$ is Gaussian noise, $t$ is sampled from $\mathcal{U}[0,1]$, and $\mathbf{y}_i^{(t)} = t\,\mathbf{y}_i + (1-t)\,\mathbf{y}_i^{(0)}$ is constructed along the conditional optimal-transport path. Note that the conditional representation $\mathbf{h}_i$ differs from the conditional path and the conditional source distribution: $\mathbf{h}_i$ is a condition at position $i$, i.e., a time-invariant condition over the whole flow-matching process $t \in [0,1]$. Technically, we implement the flow-matching network as a small MLP:

$$u_t^\theta(\mathbf{y}_i^{(t)} \mid \mathbf{h}_i) = \text{FM-Net}(\mathbf{y}_i^{(t)}, t, \mathbf{h}_i). \tag{7}$$

Training involves sampling the noised $\mathbf{y}_i^{(t)}$ and feeding it jointly with $t$; the condition $\mathbf{h}_i$ is integrated into the flow-matching network via AdaLN (peebles2023scalable). The TimeFlow Loss for autoregressive models is formulated as:

$$\mathcal{L}_{\mathrm{TimeFlow}} = \frac{1}{N} \sum_{i=1}^{N} \left\| \text{FM-Net}(\mathbf{y}_i^{(t)}, t, \mathbf{h}_i) - (\mathbf{y}_i - \mathbf{y}_i^{(0)}) \right\|^2. \tag{8}$$
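One Monte-Carlo draw of this objective can be sketched as follows (illustrative NumPy code; `fm_net` is a stand-in linear map, whereas the paper uses an MLP with AdaLN conditioning):

```python
import numpy as np

rng = np.random.default_rng(0)
F, D = 16, 32                    # prediction length and representation dim

def fm_net(y_t, t, h, W):
    """Stand-in for FM-Net: a linear map over [y_t, t, h] (the paper's
    network is an MLP with the condition h injected via AdaLN)."""
    return W @ np.concatenate([y_t, [t], h])

y = rng.normal(size=F)           # ground-truth future patch y_i
h = rng.normal(size=D)           # condition: representation of the lookback
W = rng.normal(size=(F, F + 1 + D)) * 0.01

# One draw of the TimeFlow objective, Eqs. (6) and (8):
t = rng.uniform()                # flow time t ~ U[0, 1]
y0 = rng.normal(size=F)          # source Gaussian noise y_i^(0)
y_t = t * y + (1 - t) * y0       # point on the conditional OT path
loss = np.sum((fm_net(y_t, t, h, W) - (y - y0)) ** 2)
```

In training, this squared error is averaged over positions $i$ and minibatches, and its gradient updates both FM-Net and the Transformer backbone.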
Inference
Based on Equation 2, the push-forward process conditioned on a learned representation $\mathbf{h}_i$ is formulated as

$$\mathbf{y}_i^{(t+\Delta t)} = \mathbf{y}_i^{(t)} + u_t^\theta(\mathbf{y}_i^{(t)} \mid \mathbf{h}_i)\,\Delta t. \tag{9}$$

Technically, we adopt a $K$-step uniform trajectory with $\Delta t = 1/K$. Sampling starts from initial Gaussian noise and iteratively advances with the velocity produced by the trained FM-Net, as shown in Algorithm 1.
Algorithm 1 TimeFlow Loss: Sampling
Require: condition $\mathbf{h}_i \in \mathbb{R}^D$, path steps $K$
1: Sample initial noise $\hat{\mathbf{y}}_i \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$
2: $\Delta t \leftarrow 1/K$
3: for $k$ in $\{0, 1, \dots, K-1\}$ do
4:   $\hat{\mathbf{y}}_i \leftarrow \hat{\mathbf{y}}_i + \text{FM-Net}(\hat{\mathbf{y}}_i, k\Delta t, \mathbf{h}_i)\,\Delta t$
5: end for
6: Return $\hat{\mathbf{y}}_i$
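Algorithm 1 with repeated sampling can be sketched as follows (hypothetical `timeflow_sample` helper in NumPy; the toy velocity field below is chosen so the flow provably lands on a fixed target, purely to check the Euler loop):

```python
import numpy as np

def timeflow_sample(fm_net, h, F, K=50, n_samples=20, seed=0):
    """Algorithm 1, repeated: the lookback condition h is shared across noises."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / K
    preds = []
    for _ in range(n_samples):
        y = rng.normal(size=F)                 # initial Gaussian noise
        for k in range(K):
            y = y + fm_net(y, k * dt, h) * dt  # Euler push-forward, Eq. (9)
        preds.append(y)
    return np.stack(preds)                     # (n_samples, F) raw predictions

# Toy check: a velocity field pointing at a fixed target makes every
# trajectory land on that target, regardless of the initial noise.
y_star = np.ones(8)
preds = timeflow_sample(lambda y, t, h: (y_star - y) / (1.0 - t), h=None, F=8)
point = np.median(preds, axis=0)               # median-based point forecast
assert np.allclose(preds, y_star)
```

With the trained FM-Net, the generated samples differ across noises, and their median and quantiles form the probabilistic forecast.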
This procedure generates a predicted sample $\hat{\mathbf{y}}_i$ at position $i$. To calibrate probabilistic forecasting results during inference, we repeat this procedure with different initial noises and estimate statistics such as the median and quantiles from the set of generated predictions. We implement efficient repeated sampling in the TimeFlow module: the condition (representation) of the lookback series is shared and reused across different initial noises, reducing the overhead of repeated forwarding through the Transformer backbone.
4.2 TimeBench
We collected and curated TimeBench, which comprises over a trillion time points from various sources, as shown in Figure 3. Several datasets originate from research teams (woo2024unified; ansari2024chronos; liu2024timer; liutimer). While most datasets are collected from real-world records, a small portion (0.05%) is generated synthetically to enhance pattern diversity, following KernelSynth proposed by ansari2024chronos. We also leverage substantial meteorological data (hersbach2020era5) because of the predictability of weather systems. Data of different frequencies encompasses common and comprehensive temporal dynamics.
Figure 3: Ratios of data sources in TimeBench, the pre-training corpus of Sundial. Detailed statistics are provided in Table 4.

5 Experiments
We evaluate Sundial on well-recognized zero-shot forecasting benchmarks (Section 5.1) and investigate its scaling behavior (Section 5.2). We compare TimeFlow with other training objectives (Section 5.3), delve into test-time calibration of generative forecasters (Section 5.4), conduct model adaptation of Sundial, i.e., instruction tuning (Section 5.5), and provide in-depth ablation studies of our modular enhancements (Section 5.6).
Table 1: Zero-shot forecasting results of time series foundation models on long-term forecasting datasets (Time-Series-Library) (wu2022timesnet). Prediction lengths are {96, 192, 336, 720}; averaged results over the four lengths are reported, and lower MSE or MAE indicates a better prediction. 1st Count is the number of wins achieved by a model across all prediction lengths and datasets. Results of baseline models are officially reported by shi2024time. Models are not evaluated on datasets included in their pre-training, denoted by a dash (−). Full results under all prediction lengths are provided in Table 9. Each cell shows MSE / MAE (1st Count: MSE wins / MAE wins).

| Model | ETTm1 | ETTm2 | ETTh1 | ETTh2 | ECL | Weather | 1st Count |
|---|---|---|---|---|---|---|---|
| Sundial Small (Ours) | 0.354 / 0.388 | 0.265 / 0.324 | 0.390 / 0.418 | 0.340 / 0.387 | 0.169 / 0.265 | 0.233 / 0.271 | 7 / 2 |
| Sundial Base (Ours) | 0.336 / 0.377 | 0.258 / 0.320 | 0.411 / 0.434 | 0.333 / 0.387 | 0.169 / 0.265 | 0.234 / 0.270 | 8 / 5 |
| Sundial Large (Ours) | 0.331 / 0.369 | 0.254 / 0.315 | 0.395 / 0.420 | 0.334 / 0.387 | 0.166 / 0.262 | 0.238 / 0.275 | 16 / 16 |
| Time-MoE Base (shi2024time) | 0.394 / 0.415 | 0.317 / 0.365 | 0.400 / 0.424 | 0.366 / 0.404 | 0.174 / 0.278 | 0.265 / 0.297 | 0 / 1 |
| Time-MoE Large (shi2024time) | 0.376 / 0.405 | 0.316 / 0.361 | 0.394 / 0.419 | 0.405 / 0.415 | 0.187 / 0.274 | 0.270 / 0.300 | 0 / 0 |
| Time-MoE Ultra (shi2024time) | 0.356 / 0.391 | 0.288 / 0.344 | 0.412 / 0.426 | 0.371 / 0.399 | 0.186 / 0.270 | 0.256 / 0.288 | 2 / 1 |
| Timer-XL (liu2024timer) | 0.373 / 0.392 | 0.273 / 0.336 | 0.404 / 0.417 | 0.347 / 0.388 | 0.214 / 0.278 | 0.256 / 0.294 | 1 / 3 |
| Moirai Base (woo2024unified) | 0.406 / 0.385 | 0.311 / 0.337 | 0.417 / 0.419 | 0.362 / 0.382 | − | 0.287 / 0.281 | 0 / 2 |
| Moirai Large (woo2024unified) | 0.422 / 0.391 | 0.329 / 0.343 | 0.480 / 0.439 | 0.367 / 0.377 | − | 0.264 / 0.273 | 0 / 6 |
| Chronos Base (ansari2024chronos) | 0.645 / 0.500 | 0.310 / 0.350 | 0.591 / 0.468 | 0.405 / 0.410 | − | 0.292 / 0.315 | 0 / 0 |
| Chronos Large (ansari2024chronos) | 0.555 / 0.465 | 0.295 / 0.338 | 0.588 / 0.466 | 0.455 / 0.427 | − | 0.279 / 0.306 | 0 / 0 |
| TimesFM (das2023decoder) | 0.433 / 0.418 | 0.328 / 0.346 | 0.473 / 0.443 | 0.392 / 0.406 | 0.204 / 0.273 | − | 0 / 0 |
Table 2: GIFT-Eval comprises 23 datasets characterized by a variety of frequencies, variate numbers, and prediction lengths. We evaluate zero-shot performance using 100 generated series, consistent with woo2024unified. Lower MASE or CRPS indicates better performance. Rank assigns a numerical ranking over all 97 configurations. Baseline results are officially reported by aksugift.

| Type | Model | MASE | CRPS | Rank |
|---|---|---|---|---|
| Statistical Methods | Naïve | 1.260 | 1.383 | 28.072 |
| | Seasonal Naïve | 1.000 | 1.000 | 26.175 |
| | AutoARIMA | 0.964 | 0.770 | 21.515 |
| | AutoTheta | 0.978 | 1.051 | 24.031 |
| Task-Specific Models (Supervised) | DeepAR (salinas2020deepar) | 1.206 | 0.721 | 18.938 |
| | TiDE (das2023long) | 0.980 | 0.652 | 18.557 |
| | N-BEATS (oreshkin2019n) | 0.842 | 0.689 | 21.381 |
| | PatchTST (nie2022time) | 0.762 | 0.496 | 10.052 |
| | iTransformer (liu2023itransformer) | 0.802 | 0.524 | 11.320 |
| Time Series Foundation Models (Zero-Shot) | TimesFM (das2023decoder) | 0.680 | 0.465 | 8.237 |
| | TabPFN (hoo2025tabular) | 0.748 | 0.480 | 8.268 |
| | Chronos (ansari2024chronos) | 0.786 | 0.551 | 14.309 |
| | Moirai (woo2024unified) | 0.809 | 0.515 | 10.175 |
| | Sundial (Ours) | 0.673 | 0.472 | 9.062 |

5.1 Time Series Forecasting
In this section, we focus on zero-shot forecasting and compare Sundial with advanced time series foundation models on various benchmarks: (1) point forecasting: we adopt the long-term forecasting benchmark (wu2022timesnet), which assesses performance under different forecasting horizons using MSE and MAE; (2) probabilistic forecasting: we experiment on GIFT-Eval (aksugift) and the FEV leaderboard (ansari2024chronos), following their official evaluation suites and assessing point (MASE) and probabilistic (CRPS and WQL) metrics. All evaluated datasets are excluded from the pre-training corpus. Models are available on HuggingFace, and configurations are detailed in Table 5.
5.1.1 Point Forecasting
As shown in Table 1, Sundial consistently outperforms other advanced time series foundation models. Compared with the previous state-of-the-art model Time-MoE (shi2024time), the Sundial family, using fewer parameters, achieves an average MSE reduction of 7.57% and an average MAE reduction of 4.71%. Notably, continuous tokenization allows our model to conduct patch-level forecasting with fewer autoregression steps, whereas Chronos, using point-wise discrete tokenization, may be less suitable for long-term forecasting.
Figure 4: Model evaluation on the FEV leaderboard, which includes 27 datasets not seen by Sundial. Baseline models fall into statistical methods fitted to each time series, task-specific deep models trained on each dataset, and pre-trained foundation models; pre-trained models that have seen several datasets during pre-training are denoted as Pre-trained Models (Other). Lower MASE/WQL indicates better results. Sundial makes probabilistic predictions using 20 generated series, consistent with ansari2024chronos.

5.1.2 Probabilistic Forecasting
Beyond point forecasting, Sundial possesses a unique generative capability for making probabilistic predictions. Following ansari2024chronos, we calculate the median and quantiles from a set of raw predictions of Sundial. While several baseline models have been pre-trained with the very objective used for probabilistic evaluation, e.g., quantile loss for WQL, Sundial computes these statistics without any such prior knowledge.
GIFT-Eval
Aggregated results are presented in Table 2. The benchmark evaluates 23 datasets and 13 baseline models, encompassing statistical methods, task-specific models, and time series foundation models. Among supervised models and advanced foundation models, Sundial attains first place in MASE and second place in CRPS on all unseen datasets. While the top-performing PatchTST (nie2022time) is exhaustively trained and tuned on each dataset, the zero-shot performance of Sundial highlights its simplicity and robustness on this comprehensive benchmark.
FEV Leaderboard
We evaluate Sundial on the open leaderboard established by AutoGluon (ansari2024chronos), which includes 27 datasets for probabilistic forecasting. As shown in Figure 4, the zero-shot performance of Sundial exceeds that of 70% of the statistical methods and deep models trained in distribution with supervision. While Sundial ranks second among zero-shot pre-trained models after Chronos, it achieves a 35× inference speedup, as shown in Figure 5. Thanks to patch-wise tokenization and multi-patch prediction, our inference speed is close to that of N-BEATS.
Besides, we provide qualitative showcases in Appendix D. TimeFlow can generate eventful temporal patterns that remain coherent with the input series. Beyond the mean or quantiles, our model enables the estimation of arbitrary statistics by sampling directly from the predictive distribution.
Figure 5: Inference time evaluation following ansari2024chronos, averaged over the FEV leaderboard. Computing resources of different models are marked, and the x-axis is logarithmic.

5.2 Scalability
From Table 1, larger Sundial models consistently achieve better performance as parameters scale. Beyond downstream performance, we examine the utilization of model capacity. Figure 6 shows training curves for different model sizes. Compared to Sundial (Small), the large version yields a 15.38% reduction in converged training loss, exhibiting the promising model capacity of generative forecasters.
Figure 6: Training curves on TimeBench of different model sizes.

5.3 TimeFlow Loss
Based on the flow-matching framework, TimeFlow Loss allows autoregressive models to learn and generate flexible distributions while enhancing representation learning. To validate this design, we implement two alternatives: (1) an MLP network with MSE loss and (2) a parameterized training objective based on the denoising diffusion procedure (li2024autoregressive). We adopt the same parameterized network and Transformer backbone and pre-train them on TimeBench. Since converged training losses are not comparable across objective functions, we compare zero-shot performance in Table 3. Despite also allowing sampled predictions, the diffusion-based objective performs notably worse than TimeFlow Loss.
Table 3: Zero-shot performance using different training objectives, with the same model configuration and pre-training scale. Averaged MSE over four prediction lengths is reported.

| Objective | ETTm1 | ETTm2 | ETTh1 | ETTh2 | ECL | Weather | Avg. |
|---|---|---|---|---|---|---|---|
| TimeFlow | 0.336 | 0.258 | 0.411 | 0.333 | 0.169 | 0.234 | 0.290 |
| Diffusion | 0.362 | 0.265 | 0.444 | 0.360 | 0.202 | 0.252 | 0.314 |
| MSE | 0.360 | 0.264 | 0.404 | 0.341 | 0.175 | 0.231 | 0.296 |
In addition to zero-shot performance, we provide showcases for quality evaluation in Appendix D.2. Pre-trained models optimized by the specific MSE loss can only output a single prediction, which is sometimes over-smooth due to mode collapse (refer to Appendix C.1). In contrast, generative modeling can accommodate significantly different future variations even when the lookback series are similar. We also report the probabilistic metric CRPS for different objectives in Table 7, which validates that the predictive distribution modeled by TimeFlow is more coherent and diverse than those of the alternative training objectives. This benefits downstream tasks: generating multiple plausible predictions conveys various future possibilities and enhances the reliability of decision-making.
5.4 Test-Time Calibration
Generative modeling provides the flexibility to calibrate the final prediction during inference. Based on the median-based forecasting strategy, i.e., starting from multiple noise samples of a standard Gaussian and taking the median of the raw predictions, there are two configurations for calibrating the final prediction: (1) the number of samples used to calculate statistics and (2) the number of sampling steps 𝐾 used for flow-matching. Figure 7 shows the results under different configurations.
Figure 7: We show the MASE (left) and WQL (right) on FEV w.r.t. the number of generated raw predictions (top) and the number of steps used to sample a prediction (bottom). More predictions or more sampling steps generally achieve better probabilistic metrics.
The top two figures conform to the central limit theorem: generating more samples leads to better-calibrated estimates of the prediction and its confidence interval. The bottom two figures indicate that using finer-grained steps in the push-forward process can lead to more precise predictions.
The trade-off between inference time and performance reveals the potential of test-time calibration, which does not require retraining the model. Sundial's generative capability provides flexibility for use cases requiring different levels of uncertainty. In our experiments, sampling 20 predictions, each generated with 50 steps, takes roughly one second on a CPU, which is notably more efficient than tuning deep models or statistical methods. Advanced sampling strategies and post-processing of raw predictions remain interesting directions for future exploration.
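The median-based strategy described above can be sketched as follows; the Euler sampler and the toy velocity field (which transports any noise to a fixed target `mu`) are illustrative stand-ins for the trained flow-matching network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_prediction(velocity_fn, x0, num_steps):
    """Push a noise sample x0 forward through the flow with Euler steps."""
    x = x0.copy()
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = k * dt
        x = x + dt * velocity_fn(x, t)
    return x

def median_forecast(velocity_fn, shape, num_samples=20, num_steps=50):
    """Median-based strategy: start from several standard-Gaussian noises,
    generate raw predictions, and take the elementwise median."""
    preds = np.stack([
        sample_prediction(velocity_fn, rng.standard_normal(shape), num_steps)
        for _ in range(num_samples)
    ])
    return np.median(preds, axis=0), preds

# Toy velocity field whose flow transports any noise to the target `mu`
# (a stand-in for the trained network; `mu` is ours, not the paper's).
mu = np.linspace(-1.0, 1.0, 16)
def toy_velocity(x, t):
    return (mu - x) / (1.0 - t)

point, raw = median_forecast(toy_velocity, shape=(16,))
assert np.allclose(point, mu, atol=1e-6)
# Quantiles of the raw predictions give prediction intervals, e.g. an 80% band:
lo, hi = np.quantile(raw, [0.1, 0.9], axis=0)
```

Increasing `num_samples` tightens the median and quantile estimates, while increasing `num_steps` refines the Euler discretization of the push-forward process, mirroring the two calibration knobs above.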
5.5 Model Adaptation
Inspired by the prevalence of instruction tuning (wei2021finetuned), which adapts foundation models on a collection of tasks, we fine-tune the pre-trained Sundial (Base) on the FEV leaderboard, which includes short-term tasks with different prediction lengths. Our model is tuned once on all aggregated datasets, and we evaluate performance on unseen test splits (Figure 8). We observe that performance improves further over zero-shot forecasting. Moreover, training from scratch on the aggregated datasets results in inferior performance, implying knowledge transfer in pre-trained models.
Figure 8: Performance on the FEV leaderboard, including (1) training Sundial from scratch on all datasets from the FEV leaderboard, (2) zero-shot forecasting using pre-trained Sundial, and (3) fine-tuning once on all datasets from the FEV leaderboard.
Figure 9: Ablation studies with respect to architectural enhancements. We report the averaged results of TSLib datasets (wu2022timesnet) over four prediction lengths {96, 192, 336, 720} and all six datasets. The context length is set to 2880 and the patch length to 16.
5.6 Ablation Study
We conduct several ablation studies that provide insights into the enhancements made to Sundial's architecture. We evaluate overall zero-shot performance on TSLib, which covers six datasets and four prediction lengths.
RoPE
Prior research (liu2024timer) observed that introducing RoPE (su2024roformer) yields better results in supervised forecasting tasks. As shown in Figure 9 (a), RoPE also improves zero-shot forecasting, presenting a general enhancement for time series foundation models.
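As a reference for the rotary scheme, the following NumPy sketch applies RoPE to a sequence of embeddings and checks its defining relative-position property; the dimension of 8 and the frequency base are illustrative assumptions:

```python
import numpy as np

def rope(x, base=10000.0):
    """Rotary position embedding: rotate each pair of channels by a
    position-dependent angle, so attention scores depend on relative offsets."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)           # per-pair frequencies
    angles = np.arange(seq_len)[:, None] * freqs[None]  # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q, k = rng.standard_normal(8), rng.standard_normal(8)

def score(pos_q, pos_k):
    # Place q at pos_q and k at pos_k inside a zero sequence, rotate, then dot.
    L = max(pos_q, pos_k) + 1
    seq = np.zeros((L, 8))
    seq[pos_q], seq[pos_k] = q, k
    r = rope(seq)
    return float(r[pos_q] @ r[pos_k])

# The attention score depends only on the offset (3), not absolute positions.
assert np.isclose(score(0, 3), score(5, 8))
```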
Layer Normalization
Pre-LN (baevski2018adaptive) is widely adopted in large language models (touvron2023llama) due to its training stability. As depicted in Figure 9 (b), training with Pre-LN for more iterations yields better performance. In contrast, Post-LN, the predominant choice in supervised models, may adversely affect downstream results.
FlashAttention and KV Cache
We leverage FlashAttention (dao2022flashattention) and a KV cache to reduce computational costs. As shown in Figure 9 (c) and (d), they reduce the memory footprint by 14.8% and inference time by 43.6% without affecting performance.
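A minimal single-head sketch of how a KV cache avoids re-encoding the whole context during autoregressive generation; the shapes and the single-head setup are illustrative, and Sundial's actual attention additionally uses FlashAttention kernels:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

class CachedSelfAttention:
    """Single-head causal attention with a KV cache: each newly generated
    token attends to cached keys/values instead of recomputing them for
    the entire context on every step."""
    def __init__(self, dim):
        self.wq = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wk = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.wv = rng.standard_normal((dim, dim)) / np.sqrt(dim)
        self.k_cache, self.v_cache = [], []

    def step(self, x):
        # x: (dim,) embedding of the newest token
        q, k, v = x @ self.wq, x @ self.wk, x @ self.wv
        self.k_cache.append(k)          # cache grows by one K/V per token
        self.v_cache.append(v)
        K = np.stack(self.k_cache)      # (t, dim): all keys so far
        V = np.stack(self.v_cache)
        attn = softmax(q @ K.T / np.sqrt(len(q)))
        return attn @ V

dim = 8
layer = CachedSelfAttention(dim)
tokens = rng.standard_normal((5, dim))
outs = [layer.step(t) for t in tokens]
assert len(layer.k_cache) == 5 and outs[-1].shape == (dim,)
```

Each generation step is O(t) in the context length t instead of O(t²) for re-encoding, which is where the inference-time savings come from.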
6 Conclusion
In this work, we collect and curate TimeBench, a trillion-scale time series dataset for building time series foundation models, which can benefit the research community. Towards time series foundation models, we delve into tokenization and optimization, contributing in two aspects. First, we demonstrate that continuous tokenization, such as patch tokens, can be more effective and efficient for the time series modality, and that generative modeling offers a native approach for learning on continuous-valued time series. Second, we propose a novel training objective to accommodate heterogeneous time series distributions, endowing autoregressive models with an inherent capability to sample from non-categorical distributions. Our pre-trained Sundial models make substantial advances on well-recognized forecasting leaderboards. We hope this work can inspire future paradigms for pre-training time series foundation models and enhance their applicability to real-world applications.
Acknowledgements
This work was supported by the National Natural Science Foundation of China (U2342217 and 62021002), the State Grid Ningxia Electric Power Co. Science and Technology Project (SGNXYX00SCJS2400058), the BNRist Innovation Fund (BNR2024RC01010), and the National Engineering Research Center for Big Data Software.
We extend our gratitude to Xingzhuo Guo for his expertise in flow-matching and meticulous proofreading of the method section. We further thank Jialong Wu, Yuezhou Ma and Yu Zhang for insightful discussions about generative models. Their collective support significantly enhanced this work.
Impact Statement
This paper aims to advance the development of time series foundation models. We curated a pre-training dataset from publicly available resources. Our models employ an efficient tokenization and incorporate a generative training objective. The proposed TimeFlow Loss provides insights for training generative foundation models for time series forecasting. We released our pre-trained models that demonstrate notable zero-shot forecasting performance. The generative forecasting paradigm enhances the model reliability for decision-making. Our paper mainly focuses on scientific research and has no obvious negative social impact.
Appendix A Dataset Statistics
Large-scale datasets are of paramount importance for pre-training foundation models. Recent research has contributed significant time series datasets (das2023decoder; liutimer; shi2024time). While the scaling law of time series foundation models has been explored in recent work (shi2024scaling), the pre-training scale remains relatively limited. Given the heterogeneity of time series compared with other modalities, it remains an open question whether it is feasible to learn from such an enormous corpus of series. To address this question, we curated TimeBench with one trillion time points from various domains.
Unlike other modalities, most time series are unavailable on open websites or repositories. Only a limited number of domains encompass typical and predictable time series, leading to slow progress in dataset construction. Therefore, we conducted extensive preprocessing, including missing-value imputation, anomaly exclusion, and normalization. We also performed statistical analysis, examining time series through the lenses of intrinsic properties, e.g., non-stationarity, forecastability, and seasonality. This allows us to characterize the data quality inherent to time series, which affects the training stability of next-token prediction. We further adopt synthetic techniques to improve pattern diversity and include ERA5 (munoz2021era5), which provides systematic real-world temporal observations.
The statistical details of TimeBench are summarized in Table 4. In addition to open-source datasets from research teams on time series foundation models (woo2024unified; ansari2024chronos; liutimer; liu2024timer), we collected substantial real-world time series from various domains such as finance, IoT, meteorology, and healthcare (goldberger2000physiobank). These resources enable us to construct large-scale time-series corpora exceeding a trillion time points. The corpora include highly credible and predictable data with a wide range of frequencies, lengths, and numbers of variates, providing comprehensive temporal dynamics and variation patterns to facilitate downstream applications. To prevent data leakage, we exclude all datasets evaluated in Section 5.1 to make sure that Sundial conducts zero-shot forecasting.
Table 4: Key statistics of TimeBench, the pre-training dataset of Sundial.

| Source | Reference | Pts. | % |
|---|---|---|---|
| Chronos | (ansari2024chronos) | 94B | 9.11% |
| ECG | (goldberger2000physiobank) | 48B | 4.65% |
| Finance | (Ours) | 10.5B | 1.02% |
| IoT | (Ours) | 5.8B | 0.56% |
| LOTSA | (woo2024unified) | 230B | 22.29% |
| Synthetic | (ansari2024chronos) | 0.5B | 0.05% |
| ERA5 3h | (munoz2021era5) | 129B | 12.50% |
| ERA5 12h | (munoz2021era5) | 32B | 3.10% |
| ERA5 Daily | (munoz2021era5) | 406B | 39.35% |
| ERA5 Weekly | (munoz2021era5) | 58B | 5.62% |
| ERA5 Monthly | (munoz2021era5) | 13.5B | 1.31% |
| ERA5 Quarterly | (munoz2021era5) | 4.5B | 0.44% |
| Total | | 1032B | 100% |

Appendix B Implementation Details
All experiments are implemented in PyTorch (paszke2019pytorch) and executed on 32 NVIDIA A100 GPUs. We employ the AdamW optimizer (kingma2014adam). We adopt the S3 format (liutimer) for univariate pre-training. During training, data from different domains is sampled according to predefined ratios to balance domain weightings and ensure diversity in the training data. We implement a global shuffle strategy by loading time series into a standard Parquet format. We use variable-wise normalization to unify value ranges.
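Variable-wise normalization of this kind can be sketched as follows; the epsilon and the inverse transform are our assumptions for the illustration:

```python
import numpy as np

def normalize_per_variate(series, eps=1e-8):
    """Variable-wise normalization: each univariate series is standardized
    independently so values from heterogeneous domains share a common scale
    before univariate pre-training."""
    mean = series.mean(axis=-1, keepdims=True)
    std = series.std(axis=-1, keepdims=True)
    return (series - mean) / (std + eps), (mean, std)

def denormalize(series, stats, eps=1e-8):
    """Invert the normalization to map predictions back to the data scale."""
    mean, std = stats
    return series * (std + eps) + mean

x = np.array([[10.0, 12.0, 14.0],        # a variate on a large scale
              [0.001, 0.002, 0.003]])    # a variate on a tiny scale
z, stats = normalize_per_variate(x)
assert np.allclose(z.mean(axis=-1), 0.0)
assert np.allclose(denormalize(z, stats), x)
```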
On the FEV leaderboard (ansari2024chronos), which consists of short-term forecasting datasets, we train Sundial models with TimeFlow Loss and a prediction length of 𝐹 = 16. For point forecasting (wu2022timesnet) and GIFT-Eval (aksugift), which consist of forecasting datasets with prediction lengths ranging from 6 to 900, we train Sundial models with TimeFlow Loss and a prediction length of 𝐹 = 720. When the required prediction length is shorter than the model's prediction length, we truncate the output generated by Sundial; when it exceeds the prediction horizon, we conduct rolling forecasting. Following Chronos (ansari2024chronos), we sample 20 raw predictions to calculate MASE and WQL on FEV. Consistent with Moirai (woo2024unified), we sample 100 raw predictions to calculate MASE and CRPS on GIFT-Eval. The sampling step is fixed at 𝐾 = 50. Configurations of Sundial in different sizes are provided in Table 5. We provide a model summary in Table 6, which compares several aspects of current time series foundation models.
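The truncate-or-roll logic described above can be sketched as follows; `naive_model` is a hypothetical stand-in for a single Sundial forward pass, and the context/prediction lengths match the paper's configuration:

```python
import numpy as np

def forecast(model_fn, context, horizon, model_len=720, max_context=2880):
    """Match an arbitrary required horizon to a fixed model prediction length:
    truncate the output when the horizon is shorter, and roll forward
    autoregressively (feeding predictions back as context) when it is longer."""
    preds = []
    ctx = list(context)
    while sum(len(p) for p in preds) < horizon:
        window = np.asarray(ctx[-max_context:])  # most recent context
        step = model_fn(window, model_len)       # one model call
        preds.append(step)
        ctx.extend(step)                         # feed predictions back
    return np.concatenate(preds)[:horizon]       # truncate to the request

# Toy stand-in model: repeats the last observed value.
def naive_model(window, model_len):
    return np.full(model_len, window[-1])

context = np.arange(100.0)
out = forecast(naive_model, context, horizon=1000)  # 1000 > 720 -> two rolls
assert out.shape == (1000,)
```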
Table 5: Model configurations of the Sundial family.

| Model | Patch Size (𝑃) | Context Length (𝑇) | Prediction Length (𝐹) | Layers (𝐿) | Dimension (𝐷, 𝐷ff) | MHA Heads (𝐻) | TimeFlow (𝐷tf, 𝐿tf) | Total Parameters |
|---|---|---|---|---|---|---|---|---|
| Sundial Small | 16 | 2880 | {16, 720} | 6 | (512, 2048) | 8 | (512, 3) | 32M |
| Sundial Base | 16 | 2880 | {16, 720} | 12 | (768, 3072) | 12 | (768, 3) | 128M |
| Sundial Large | 16 | 2880 | {16, 720} | 24 | (1024, 4096) | 16 | (1024, 6) | 444M |

∗ 𝐷 is the embedding dimension of the Transformer; 𝐷ff is the hidden dimension of the FFN; 𝐷tf is the hidden dimension of the flow-matching network; 𝐿 is the number of Transformer layers; 𝐿tf is the number of flow-matching network layers.
Table 6: Comparison of time series foundation models. Architecture denotes the Transformer category. Model Size presents parameter counts of different model sizes. Pre-training Scale measures pre-training datasets in time points. Token Level presents the granularity of time series tokens. Tokenization denotes what kind of values are embedded from time series. Context Length means the input length supported by the model. Probabilistic means generating multiple probable predictions, as opposed to deterministic forecasters.

| Method | Architecture | Model Size | Pre-training Scale | Token Level | Tokenization | Context Length | Probabilistic |
|---|---|---|---|---|---|---|---|
| Sundial (Ours) | Decoder | 32M / 128M / 444M | 1032B | Patch | Continuous | ≤ 2880 | True |
| Time-MoE (shi2024time) | Decoder | 113M / 453M / 2.4B | 300B | Point | Continuous | ≤ 4096 | False |
| Timer-XL (liu2024timer) | Decoder | 84M | 260B | Patch | Continuous | ≤ 2880 | False |
| Moirai (woo2024unified) | Encoder | 14M / 91M / 311M | 231B | Patch | Continuous | ≤ 5000 | True |
| MOMENT (goswami2024moment) | Encoder | 40M / 125M / 385M | 1.13B | Patch | Continuous | = 512 | False |
| LLMTime (gruver2024large) | Decoder | - | - | Point | Discrete | - | True |
| Chronos (ansari2024chronos) | EncDec | 46M / 200M / 710M | 84B | Point | Discrete | ≤ 512 | True |
| Lag-Llama (rasul2023lag) | Decoder | 200M | 0.36B | Point | Continuous | ≤ 1024 | True |
| TimesFM (das2023decoder) | Decoder | 17M / 70M / 200M | 100B | Patch | Continuous | ≤ 512 | False |

Appendix C Supplementary Results
C.1 Discussion of Mode Collapse
Mode collapse is a failure of representation learning in which a model generates a limited variety of outputs, ignoring the diversity of the training data. For time series foundation models, mode collapse stems from the heterogeneity of time series distributions, e.g., similar lookback series can diverge into very different trends. In other words, the semantics of time series patterns are highly unstable. This sometimes leads to over-smooth predictions from models optimized by MSE, because such results are globally optimal for this loss (see the showcases on the right of Figures 14-15). Such a training objective pre-defines a unimodal predictive distribution of the data, which struggles to accommodate large-scale datasets like TimeBench.
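A toy experiment illustrates why MSE collapses modes: when the same lookback admits two equally likely continuations, the MSE-optimal point prediction is their mean, which matches neither actual future; the two-mode setup is our illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely continuations of the same lookback: a rising mode (+1)
# and a falling mode (-1).
futures = np.where(rng.uniform(size=10000) < 0.5, 1.0, -1.0)

# Search for the constant prediction minimizing the MSE against all futures.
candidates = np.linspace(-1.5, 1.5, 301)
mse = [np.mean((futures - c) ** 2) for c in candidates]
best = candidates[int(np.argmin(mse))]

# The global MSE optimum sits between the modes: an over-smooth prediction
# that matches neither future.
assert abs(best) < 0.1
```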
Our work addresses this phenomenon through generative modeling. Generative forecasters learn flexible distributions without relying on probabilistic priors. We evaluate the distributional metric Continuous Ranked Probability Score (CRPS) to assess the quality of generated predictions across different training objectives. The results indicate that the predictive distribution modeled by TimeFlow is more coherent and diverse compared to alternative training objectives, particularly on the highly diverse GIFT-Eval (aksugift). It validates the effectiveness of TimeFlow in mitigating mode collapse.
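For reference, CRPS can be estimated directly from a set of generated predictions via the standard energy-form estimator; this sketch is ours, not the paper's evaluation code:

```python
import numpy as np

def crps_from_samples(samples, y):
    """Sample-based CRPS estimator:
    CRPS = E|X - y| - 0.5 * E|X - X'|, with X, X' drawn from the predictive
    distribution (here: the generated raw predictions) and y the observation."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

# A degenerate (deterministic) forecast reduces CRPS to the absolute error.
assert np.isclose(crps_from_samples([2.0, 2.0, 2.0], 3.0), 1.0)

# For the same spread of samples, centering them on the truth lowers CRPS.
rng = np.random.default_rng(0)
centered = rng.normal(0.0, 1.0, 1000)
shifted = centered + 2.0
assert crps_from_samples(centered, 0.0) < crps_from_samples(shifted, 0.0)
```

Lower CRPS rewards predictive distributions that are both sharp and well-centered, which is why it can separate the generative objectives in Table 7 where a point metric cannot.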
Table 7: Zero-shot probabilistic forecasting performance using different training objectives. Averaged CRPS is reported here.

| Objective | ETTh1 | ETTh2 | ETTm1 | ETTm2 | ECL | Weather | GIFT-Eval |
|---|---|---|---|---|---|---|---|
| TimeFlow Loss | 0.0059 | 0.0037 | 0.0057 | 0.0029 | 0.0082 | 0.0021 | 0.5050 |
| Diffusion Loss | 0.0082 | 0.0053 | 0.0070 | 0.0039 | 0.0095 | 0.0032 | 0.5340 |
| MSE Loss | 0.0063 | 0.0040 | 0.0058 | 0.0032 | 0.0080 | 0.0023 | 0.6420 |

C.2 Scaling Behavior Using More Data
We compare Sundial with other time series foundation models pre-trained on smaller datasets: Chronos (ansari2024chronos) is pre-trained on 94 billion time points, and Moirai (woo2024unified) on 230 billion. Since their pre-training datasets are subsets of TimeBench, we also pre-train Sundial on these subsets. As shown in Table 8, the results highlight the scaling behavior of Sundial on larger datasets. Moreover, Sundial still achieves better zero-shot forecasting than its counterpart model when given the same pre-training dataset.
Table 8: Zero-shot forecasting performance of models trained on different scales of datasets (measured in time points, pts; 1B means a billion). We report the averaged results over four prediction lengths {96, 192, 336, 720} on the Time-Series-Library (wu2022timesnet). Each cell shows MSE / MAE.

| Dataset | Chronos (94B) | Moirai (230B) | Sundial (94B) | Sundial (230B) | Sundial (1032B) |
|---|---|---|---|---|---|
| ETTh1 | 0.591 / 0.468 | 0.417 / 0.419 | 0.402 / 0.429 | 0.403 / 0.419 | 0.411 / 0.434 |
| ETTh2 | 0.405 / 0.410 | 0.362 / 0.382 | 0.377 / 0.414 | 0.364 / 0.398 | 0.333 / 0.387 |
| ETTm1 | 0.645 / 0.500 | 0.406 / 0.385 | 0.367 / 0.402 | 0.352 / 0.385 | 0.336 / 0.377 |
| ETTm2 | 0.310 / 0.350 | 0.311 / 0.337 | 0.280 / 0.341 | 0.273 / 0.334 | 0.258 / 0.320 |
| ECL | 0.214 / 0.278 | 0.187 / 0.274 | 0.172 / 0.269 | 0.171 / 0.267 | 0.169 / 0.265 |
| Weather | 0.292 / 0.315 | 0.287 / 0.281 | 0.254 / 0.301 | 0.252 / 0.297 | 0.234 / 0.270 |

C.3 Performance with Varying Lookback Lengths
Time series foundation models operate without downstream training, functioning similarly to statistical methods. For a given forecasting task, one of the most important hyperparameters is the lookback length. Unlike fixed-context models, Sundial offers flexibility for practitioners, allowing the context length to be adjusted dynamically during inference. In Figure 10, we present Sundial's performance with various lookback lengths. Based on our observations, performance largely depends on the forecasting task itself: the lookback window can be tuned to match the forecasting horizon and the data's periodicity. Time series foundation models provide a training-free approach for rapid adjustments; still, their fundamental long-context capabilities should be strengthened to handle high-frequency data.
Figure 10: Zero-shot forecasting performance using different lookback lengths in {480, 960, 1440, 1920, 2400, 2880}. We report the averaged results over four prediction lengths {96, 192, 336, 720} on the Time-Series-Library (wu2022timesnet).
C.4 Zero-Shot Results of Point Forecasting
Table 9 provides full zero-shot results on the Time-Series-Library forecasting benchmark (wu2022timesnet), including prediction horizons in {96, 192, 336, 720}. We build Sundial in different model sizes with the configurations in Table 5. The context length is fixed at 2880. We truncate the model's predictions for tasks requiring a prediction length less than 𝐹 = 720.
We compare the most advanced time series foundation models based on their official checkpoints, including Time-MoE (shi2024time), Timer (liu2024timer; liutimer), Moirai (woo2024unified), TimesFM (das2023decoder), and Chronos (ansari2024chronos). We conduct zero-shot evaluations only on datasets not included in the pre-training of the corresponding models. For each evaluated model, we use its maximum input length during inference. Metrics (MSE/MAE) are calculated over all predicted windows in the test split of each dataset, following liu2024timer.
C.5 Zero-Shot Results on GIFT-Eval and the FEV Leaderboard
We evaluate our models on GIFT-Eval, a benchmark designed to comprehensively assess forecasting performance across diverse time series. GIFT-Eval includes 23 datasets covering 144,000 time series and 177 million data points, constituting 97 forecasting configurations in total. We use the official evaluation suite established by the Salesforce research team and report aggregated results in Table 2. We also evaluate performance and inference time on the FEV leaderboard, originally proposed by ansari2024chronos and hosted by AutoGluon, which comprises 27 datasets for zero-shot evaluation. We report aggregated metrics in Figure 4 and assess inference time in Figure 5. We released detailed results by submitting Sundial to their open benchmark.
Appendix D Showcases
D.1 Showcases of Sundial
Figures 11-13 present zero-shot forecasting showcases on all datasets from FEV (ansari2024chronos) and TSLib (wu2022timesnet). By generating 20 predictions from different initial noise, we estimate the median and the 80% prediction interval.
Table 9: Zero-shot forecasting results of time series foundation models on long-term forecasting datasets (wu2022timesnet). A lower MSE or MAE indicates a better prediction. Averaged results over four prediction lengths are reported. 1st Count represents the number of wins achieved by a model across all prediction lengths and datasets. Results of baseline models are officially reported by shi2024time. Datasets used for pre-training are not evaluated on the corresponding models, denoted by a dash (−). Each cell shows MSE / MAE.

| Dataset | Horizon | Sundial Small (Ours) | Sundial Base (Ours) | Sundial Large (Ours) | Time-MoE Base (shi2024time) | Time-MoE Large (shi2024time) | Time-MoE Ultra (shi2024time) | Timer-XL (liu2024timer) | Moirai Base (woo2024unified) | Moirai Large (woo2024unified) | Chronos Base (ansari2024chronos) | Chronos Large (ansari2024chronos) | TimesFM (das2023decoder) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ETTm1 | 96 | 0.292 / 0.342 | 0.280 / 0.334 | 0.273 / 0.329 | 0.338 / 0.368 | 0.309 / 0.357 | 0.281 / 0.341 | 0.317 / 0.356 | 0.363 / 0.356 | 0.380 / 0.361 | 0.454 / 0.408 | 0.457 / 0.403 | 0.361 / 0.370 |
| ETTm1 | 192 | 0.337 / 0.376 | 0.321 / 0.366 | 0.312 / 0.357 | 0.353 / 0.388 | 0.346 / 0.381 | 0.305 / 0.358 | 0.358 / 0.381 | 0.388 / 0.375 | 0.412 / 0.383 | 0.567 / 0.477 | 0.530 / 0.450 | 0.414 / 0.405 |
| ETTm1 | 336 | 0.370 / 0.401 | 0.350 / 0.389 | 0.343 / 0.378 | 0.381 / 0.413 | 0.373 / 0.408 | 0.369 / 0.395 | 0.386 / 0.401 | 0.416 / 0.392 | 0.436 / 0.400 | 0.662 / 0.525 | 0.577 / 0.481 | 0.445 / 0.429 |
| ETTm1 | 720 | 0.418 / 0.433 | 0.394 / 0.418 | 0.397 / 0.413 | 0.504 / 0.493 | 0.475 / 0.477 | 0.469 / 0.472 | 0.430 / 0.431 | 0.460 / 0.418 | 0.462 / 0.420 | 0.900 / 0.591 | 0.660 / 0.526 | 0.512 / 0.471 |
| ETTm1 | Avg | 0.354 / 0.388 | 0.336 / 0.377 | 0.331 / 0.369 | 0.394 / 0.415 | 0.376 / 0.405 | 0.356 / 0.391 | 0.373 / 0.392 | 0.406 / 0.385 | 0.422 / 0.391 | 0.645 / 0.500 | 0.555 / 0.465 | 0.433 / 0.418 |
| ETTm2 | 96 | 0.178 / 0.260 | 0.170 / 0.256 | 0.172 / 0.255 | 0.201 / 0.291 | 0.197 / 0.286 | 0.198 / 0.288 | 0.189 / 0.277 | 0.205 / 0.273 | 0.211 / 0.274 | 0.199 / 0.274 | 0.197 / 0.271 | 0.202 / 0.270 |
| ETTm2 | 192 | 0.235 / 0.304 | 0.229 / 0.300 | 0.227 / 0.296 | 0.258 / 0.334 | 0.250 / 0.322 | 0.235 / 0.312 | 0.241 / 0.315 | 0.275 / 0.316 | 0.281 / 0.318 | 0.261 / 0.322 | 0.254 / 0.314 | 0.289 / 0.321 |
| ETTm2 | 336 | 0.287 / 0.342 | 0.281 / 0.337 | 0.275 / 0.331 | 0.324 / 0.373 | 0.337 / 0.375 | 0.293 / 0.348 | 0.286 / 0.348 | 0.329 / 0.350 | 0.341 / 0.355 | 0.326 / 0.366 | 0.313 / 0.353 | 0.360 / 0.366 |
| ETTm2 | 720 | 0.360 / 0.390 | 0.351 / 0.387 | 0.343 / 0.378 | 0.488 / 0.464 | 0.480 / 0.461 | 0.427 / 0.428 | 0.375 / 0.402 | 0.437 / 0.411 | 0.485 / 0.428 | 0.455 / 0.439 | 0.416 / 0.415 | 0.462 / 0.430 |
| ETTm2 | Avg | 0.265 / 0.324 | 0.258 / 0.320 | 0.254 / 0.315 | 0.317 / 0.365 | 0.316 / 0.361 | 0.288 / 0.344 | 0.273 / 0.336 | 0.311 / 0.337 | 0.329 / 0.343 | 0.310 / 0.350 | 0.295 / 0.338 | 0.328 / 0.346 |
| ETTh1 | 96 | 0.341 / 0.381 | 0.348 / 0.385 | 0.346 / 0.383 | 0.357 / 0.381 | 0.350 / 0.382 | 0.349 / 0.379 | 0.369 / 0.391 | 0.376 / 0.392 | 0.381 / 0.388 | 0.440 / 0.393 | 0.441 / 0.390 | 0.414 / 0.404 |
| ETTh1 | 192 | 0.381 / 0.408 | 0.393 / 0.418 | 0.386 / 0.410 | 0.384 / 0.404 | 0.388 / 0.412 | 0.395 / 0.413 | 0.405 / 0.413 | 0.412 / 0.413 | 0.434 / 0.415 | 0.492 / 0.426 | 0.502 / 0.524 | 0.465 / 0.434 |
| ETTh1 | 336 | 0.405 / 0.424 | 0.422 / 0.440 | 0.410 / 0.426 | 0.411 / 0.434 | 0.411 / 0.430 | 0.447 / 0.453 | 0.418 / 0.423 | 0.433 / 0.428 | 0.485 / 0.445 | 0.550 / 0.462 | 0.576 / 0.467 | 0.503 / 0.456 |
| ETTh1 | 720 | 0.433 / 0.458 | 0.481 / 0.493 | 0.438 / 0.459 | 0.449 / 0.477 | 0.427 / 0.455 | 0.457 / 0.462 | 0.423 / 0.441 | 0.447 / 0.444 | 0.611 / 0.510 | 0.882 / 0.591 | 0.835 / 0.583 | 0.511 / 0.481 |
| ETTh1 | Avg | 0.390 / 0.418 | 0.411 / 0.434 | 0.395 / 0.420 | 0.400 / 0.424 | 0.394 / 0.419 | 0.412 / 0.426 | 0.404 / 0.417 | 0.417 / 0.419 | 0.480 / 0.439 | 0.591 / 0.468 | 0.588 / 0.466 | 0.473 / 0.443 |
| ETTh2 | 96 | 0.272 / 0.332 | 0.271 / 0.333 | 0.269 / 0.330 | 0.305 / 0.359 | 0.302 / 0.354 | 0.292 / 0.352 | 0.283 / 0.342 | 0.294 / 0.330 | 0.296 / 0.330 | 0.308 / 0.343 | 0.320 / 0.345 | 0.315 / 0.349 |
| ETTh2 | 192 | 0.329 / 0.374 | 0.327 / 0.376 | 0.325 / 0.373 | 0.351 / 0.386 | 0.364 / 0.385 | 0.347 / 0.379 | 0.340 / 0.379 | 0.365 / 0.375 | 0.361 / 0.371 | 0.384 / 0.392 | 0.406 / 0.399 | 0.388 / 0.395 |
| ETTh2 | 336 | 0.357 / 0.399 | 0.354 / 0.402 | 0.354 / 0.400 | 0.391 / 0.418 | 0.417 / 0.425 | 0.406 / 0.419 | 0.366 / 0.400 | 0.376 / 0.390 | 0.390 / 0.390 | 0.429 / 0.430 | 0.492 / 0.453 | 0.422 / 0.427 |
| ETTh2 | 720 | 0.401 / 0.442 | 0.381 / 0.435 | 0.389 / 0.443 | 0.419 / 0.454 | 0.537 / 0.496 | 0.439 / 0.447 | 0.397 / 0.431 | 0.416 / 0.433 | 0.423 / 0.418 | 0.501 / 0.477 | 0.603 / 0.511 | 0.443 / 0.454 |
| ETTh2 | Avg | 0.340 / 0.387 | 0.333 / 0.387 | 0.334 / 0.387 | 0.366 / 0.404 | 0.405 / 0.415 | 0.371 / 0.399 | 0.347 / 0.388 | 0.362 / 0.382 | 0.367 / 0.377 | 0.405 / 0.410 | 0.455 / 0.427 | 0.392 / 0.406 |
| ECL | 96 | 0.134 / 0.231 | 0.132 / 0.229 | 0.130 / 0.227 | − | − | − | 0.141 / 0.237 | 0.160 / 0.250 | 0.153 / 0.241 | 0.154 / 0.231 | 0.152 / 0.229 | − |
| ECL | 192 | 0.154 / 0.251 | 0.152 / 0.250 | 0.150 / 0.247 | − | − | − | 0.159 / 0.254 | 0.175 / 0.263 | 0.169 / 0.255 | 0.179 / 0.254 | 0.172 / 0.250 | − |
| ECL | 336 | 0.174 / 0.271 | 0.173 / 0.271 | 0.170 / 0.268 | − | − | − | 0.177 / 0.272 | 0.187 / 0.277 | 0.187 / 0.273 | 0.214 / 0.284 | 0.203 / 0.276 | − |
| ECL | 720 | 0.215 / 0.307 | 0.218 / 0.311 | 0.214 / 0.307 | − | − | − | 0.219 / 0.308 | 0.228 / 0.309 | 0.237 / 0.313 | 0.311 / 0.346 | 0.289 / 0.337 | − |
| ECL | Avg | 0.169 / 0.265 | 0.169 / 0.265 | 0.166 / 0.262 | − | − | − | 0.174 / 0.278 | 0.187 / 0.274 | 0.186 / 0.270 | 0.214 / 0.278 | 0.204 / 0.273 | − |
| Weather | 96 | 0.158 / 0.206 | 0.157 / 0.205 | 0.157 / 0.208 | 0.160 / 0.214 | 0.159 / 0.213 | 0.157 / 0.211 | 0.171 / 0.225 | 0.220 / 0.217 | 0.199 / 0.211 | 0.203 / 0.238 | 0.194 / 0.235 | − |
| Weather | 192 | 0.205 / 0.253 | 0.205 / 0.251 | 0.207 / 0.256 | 0.210 / 0.260 | 0.215 / 0.266 | 0.208 / 0.256 | 0.221 / 0.271 | 0.271 / 0.259 | 0.246 / 0.251 | 0.256 / 0.290 | 0.249 / 0.285 | − |
| Weather | 336 | 0.254 / 0.290 | 0.253 / 0.289 | 0.259 / 0.295 | 0.274 / 0.309 | 0.291 / 0.322 | 0.255 / 0.290 | 0.274 / 0.311 | 0.286 / 0.297 | 0.274 / 0.291 | 0.314 / 0.336 | 0.302 / 0.327 | − |
| Weather | 720 | 0.315 / 0.336 | 0.320 / 0.336 | 0.327 / 0.342 | 0.418 / 0.405 | 0.415 / 0.400 | 0.405 / 0.397 | 0.356 / 0.370 | 0.373 / 0.354 | 0.337 / 0.340 | 0.397 / 0.396 | 0.372 / 0.378 | − |
| Weather | Avg | 0.233 / 0.271 | 0.234 / 0.270 | 0.238 / 0.275 | 0.265 / 0.297 | 0.270 / 0.300 | 0.256 / 0.288 | 0.256 / 0.294 | 0.287 / 0.281 | 0.264 / 0.273 | 0.292 / 0.315 | 0.279 / 0.306 | − |
| 1st Count | | 7 / 2 | 8 / 5 | 16 / 16 | 0 / 1 | 0 / 0 | 2 / 1 | 1 / 3 | 0 / 2 | 0 / 6 | 0 / 0 | 0 / 0 | 0 / 0 |

∗ Traffic (trafficdata) is not evaluated because it is included in the pre-training datasets of these time series foundation models.
Figure 11: Showcases of zero-shot predictions from Sundial (Base) on the FEV leaderboard (ansari2024chronos).
Figure 12: Showcases of zero-shot predictions from Sundial (Base) on the FEV leaderboard (ansari2024chronos).
Figure 13: Showcases of zero-shot predictions from Sundial (Base) on long-term forecasting datasets (wu2022timesnet).
D.2 Showcases of Generative Forecasters and Deterministic Forecasters
Having introduced generative modeling into time series foundation models, we compare zero-shot forecasting showcases from two types of models: (1) Sundial, a generative forecaster pre-trained with TimeFlow, which can predict multiple future possibilities from a lookback series, and (2) a Transformer with the same backbone, pre-trained on TimeBench with MSE Loss. As a deterministic forecaster, the latter can only output the mean prediction. As depicted in Figures 14-15, the unimodal Gaussian prior implied by MSE can be ill-suited to large-scale pre-training, manifesting as occasionally over-smooth predictions in downstream forecasting tasks. We therefore hope this work can inspire future paradigms for pre-training time series foundation models and enhance their applicability to real-world scenarios.
Figure 14: Showcases of Sundial (Left) and the same Transformer backbone pre-trained by MSE Loss (Right). MSE Loss optimizes a deterministic forecaster: given a lookback series, the model can produce only one prediction as an estimate of the mean. This objective may fail to accommodate divergent future variations during large-scale pre-training, leading to mode collapse and over-smooth results (as illustrated in the fourth row). TimeFlow optimizes a generative forecaster: it generates the various possibilities observed in the pre-training dataset. From these raw predictions, we can estimate the underlying complicated distribution and different statistics. Moreover, the greater the concentration of generated predictions, the higher the model's confidence in its prediction.
Figure 15: Supplementary showcases of Sundial (Left) and the same Transformer backbone pre-trained by MSE Loss (Right).
Appendix E Limitations
Our models represent an initial effort to incorporate generative modeling into time series foundation models, enabling pre-training on heterogeneous time series without specifying any prior distribution. This approach mitigates mode collapse in representation learning and generates a diverse range of probable predictions compared with previous deterministic forecasters. Despite significant progress in enlarging model capacity, the Sundial family may still produce hallucinations. Performance on very high-frequency data is not guaranteed, since TimeBench contains many middle- and low-frequency time series. An important future direction is therefore to generalize Sundial to multi-scale time series. This limitation also suggests new opportunities at inference time: since we only adopt a naïve sampling strategy that begins with random Gaussian noise, there is much room for improvement in sampling strategies and post-processing, such as frequency normalization.
Another avenue for future development lies in model adaptation. Sundial is pre-trained in a univariate manner to address the discrepancy in variate numbers across datasets, which prevents it from explicitly utilizing variate correlations or covariate information. As an increasing number of studies address two-dimensional (multivariate) modeling, multivariate pre-training is likely to be conducted for domain-specific time series foundation models. Lastly, while autoregressive models provide flexibility in the input context length, multiple steps of autoregression may still lead to over-smooth predictions and unreliable results.
Appendix F Societal Impacts
F.1 Real-World Applications
In this work, we present Sundial, a family of time series foundation models that facilitates out-of-the-box forecasting. Our models employ native tokenization for continuous-valued time series and incorporate a flexible training objective, the proposed TimeFlow Loss, to enable probabilistic forecasting. With unprecedented model capacity and a trillion-scale dataset, our models can be used directly or adapted for various forecasting scenarios, such as energy planning, weather forecasting, and financial risk prevention. By generating multiple predictions at a just-in-time inference speed, our model enhances the reliability of decision-making and streamlines the forecasting pipeline for practitioners. This paper primarily focuses on scientific research and does not present any evident negative social impact.
F.2 Academic Research
We curate TimeBench, a trillion-scale time series dataset for pre-training foundation models for time series analysis, which we believe will benefit the research community. Technically, we propose TimeFlow Loss to facilitate the learning of flexible next-patch distributions. Conditioned on the lookback representations acquired by autoregressive Transformers, our model is endowed with a novel generative capability for probabilistic forecasting, enhancing the representation learning of Transformers without the need for discrete tokenization. Through pre-training at an unprecedented scale, we identify subtle scalability bottlenecks that are not solely attributable to architectural design but are predominantly influenced by the training objectives of foundation models. The proposed TimeFlow Loss, applied to autoregressive and generative models, may provide insights for the future development of time series foundation models.
Generated on Wed Nov 5 11:23:59 2025 by LaTeXML