ortner committed
Commit 474fb63 · 1 Parent(s): ea2c37f

Updated README

Files changed (3):
  1. README.md +96 -88
  2. figs/FlowState.png +2 -2
  3. figs/flowstate_performance.png +2 -2
README.md CHANGED
---
license: apache-2.0
---
# FlowState
[Paper](https://www.arxiv.org/abs/2508.05287) | [HuggingFace Model Card](https://huggingface.co/ibm-research/flowstate) | [GitHub Model Code](https://github.com/ibm-granite/granite-tsfm/tree/gift-flowstate/tsfm_public/models/flowstate)

![Illustration](figs/FlowState.png)
FlowState is the first time-scale-adjustable Time Series Foundation Model (TSFM), open-sourced by IBM Research.
Combining a State Space Model (SSM) encoder with a Functional Basis Decoder allows FlowState to map inputs into a timescale-invariant coefficient space and produce a continuous forecast from that space.
This allows FlowState to adjust seamlessly to any sampling rate.
Training at one time scale therefore helps inference at all scales, allowing drastically improved utilization of training data across time scales.
This innovation leads to a significant improvement in performance, making FlowState the new state of the art in zero-shot time series forecasting.
## Update: Changes from 1.0 to 1.1
- Inclusion of synthetic pre-training data following [CauKer](https://github.com/ShifengXIE/CauKer)
- Increased pre-training context length from 2048 to 4096
- Improved S5 layer with an output gating mechanism
- Optimized hyperparameters
- Larger MLP layer
- Total number of parameters: 18.5M

**Important**: To use FlowState-r1.1, include the `revision="r1.1"` argument when loading the model, as demonstrated in the `Recommended Use` section.
If no `revision` argument is provided, version 1.0 is used.
## Key Features
- **FlowState**: We present an SSM-based time series foundation model that can be dynamically adjusted to the specific characteristics of the time series at evaluation time.
- **Functional Basis Decoder (FBD)**: We propose a novel decoder, a critical component of FlowState, that utilizes a set of continuous basis functions to produce continuous forecasts and allow seamless adjustment to the input's characteristics.
- **Flexible temporal adaptation**: FlowState can dynamically adjust the context and target length to the timescale of the provided time series.
- **Compact and high-performing**: With fewer than 10M parameters (r1.0; 18.5M for r1.1) and the ability to forecast multiple consecutive patches in parallel, FlowState delivers state-of-the-art accuracy with exceptional efficiency.

This model card contains the model weights for research use only and for full reproducibility of the results published in our [paper](https://www.arxiv.org/abs/2508.05287). If you are looking for the FlowState model weights for commercial and enterprise use, please refer to our Granite releases [here](https://huggingface.co/ibm-granite/granite-timeseries-flowstate-r1).
## Benchmark Highlights
![Illustration](figs/flowstate_performance.png)
Despite being **more than 10x smaller** than the three next-best models,
FlowState is the **best zero-shot model** on the [GIFT-Eval Leaderboard](https://huggingface.co/spaces/Salesforce/GIFT-Eval).
The figure compares GIFT-Eval MASE performance vs. model size for FlowState and the 10 next-best zero-shot models, as of Sep. 9th, 2025.
## Model Details
Model details can be found in our [paper](https://www.arxiv.org/abs/2508.05287).
Currently, FlowState only supports zero-shot forecasting.
## Recommended Use
FlowState can be used to make predictions as follows:
```Python
import torch

from tsfm_public import FlowStateForPrediction

device = 'cuda'
# Download the FlowState research checkpoint (non-commercial use):
predictor = FlowStateForPrediction.from_pretrained("ibm-research/flowstate", revision="r1.1").to(device)
time_series = torch.randn((2048, 32, 1), device=device)  # (context, batch, n_ch)
forecast = predictor(time_series, scale_factor=0.25, prediction_length=960, batch_first=False)
print(forecast.prediction_outputs.shape)  # torch.Size([32, 9, 48, 1]) (batch, quantiles, forecast_length, n_ch)
```
We recommend that users determine a suitable scale factor for their specific time series data, as explained in the next section.
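The predictor returns quantile forecasts rather than a single trajectory. The following sketch shows how such an output might be post-processed, using a random stand-in tensor with the shape printed in the example above; it assumes the nine quantile levels are ordered from lowest to highest with the median at index 4, which is an assumption and is not stated on this card:

```python
import torch

# Stand-in for forecast.prediction_outputs with the shape reported above:
# (batch, quantiles, forecast_length, n_ch) = (32, 9, 48, 1).
prediction_outputs = torch.randn(32, 9, 48, 1)

# Assumption: quantiles are ordered low -> high, so index 4 of 9 is the median.
median_forecast = prediction_outputs[:, 4]   # (batch, forecast_length, n_ch)
lower_band = prediction_outputs[:, 0]        # lowest quantile
upper_band = prediction_outputs[:, -1]       # highest quantile

print(median_forecast.shape)  # torch.Size([32, 48, 1])
```

Selecting one quantile index collapses the quantile dimension, leaving a per-channel point forecast that can be plotted alongside the lower and upper bands.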
#### Temporal Scaling
For common sampling rates, we recommend the following scale factors.

| Sampling Rate | Recommended Scale Factor |
|---------------|--------------------------|
| 15 min | 0.25 |
| 30 min | 0.5 |
| Hourly | 1.0 |
| Daily | 3.43 if the data has a weekly cycle, else 0.0656 |
| Weekly | 0.46 |
| Monthly | 2 |

For optimal performance, we recommend first determining the seasonality of your data and calculating the scale factor from it.

Assuming the data has repeating structures every N = 96 time steps (such as quarter-hourly sampled data with a daily cycle), i.e. a seasonality of 96, the scale factor is calculated as follows:

scale_factor = base seasonality / N = 24 / 96 = 0.25

where 24 is the base seasonality used during pretraining.
If the seasonality is unclear, it is best to experiment with different scale factors and select what works best.
We recommend forecasting no more than 30 seasons (in our example, 96 * 30 = 2880 time steps); beyond that, forecast quality declines.
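The rule above is easy to wrap in a small helper. A minimal sketch; the function name and structure are illustrative and not part of the `tsfm_public` API:

```python
BASE_SEASONALITY = 24  # base seasonality used during FlowState pretraining


def recommended_scale_factor(seasonality: int, base: int = BASE_SEASONALITY) -> float:
    """Scale factor for a series whose dominant cycle spans `seasonality` steps."""
    return base / seasonality


# Quarter-hourly data with a daily cycle: 96 steps per season.
print(recommended_scale_factor(96))  # 0.25
# Daily data with a weekly cycle: 7 steps per season.
print(round(recommended_scale_factor(7), 2))  # 3.43
```

These two calls reproduce the 15-min and daily rows of the table above; the remaining rows follow from the same rule (e.g. hourly data with a daily cycle gives 24 / 24 = 1.0).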
## Installation
To run FlowState, follow the installation instructions [here](https://github.com/ibm-granite/granite-tsfm/tree/gift-flowstate/?tab=readme-ov-file#initial-setup).
For the GIFT-Eval evaluation notebook, we recommend using Python 3.11 and installing gift-eval according to its [repo](https://github.com/SalesforceAIResearch/gift-eval).
## Example Recipes and Notebooks
- Getting started notebook: [here](https://github.com/ibm-granite/granite-tsfm/tree/gift-flowstate/notebooks/hfdemo/flowstate_getting_started.ipynb)
- GIFT-Eval notebook: [here](https://github.com/ibm-granite/granite-tsfm/tree/gift-flowstate/notebooks/hfdemo/flowstate_gift_eval.ipynb)
## Pretraining Data
As pretraining data, we used a subset of [Gift-Eval Pretrain](https://huggingface.co/datasets/Salesforce/GiftEvalPretrain) and a subset of the [Chronos Pretraining Data Corpus](https://huggingface.co/datasets/autogluon/chronos_datasets).
None of the datasets used (or sub-/up-sampled versions thereof) are contained in GIFT-Eval (neither the train, validation, nor test split).
All our GIFT-Eval results are zero-shot.
## Citation
Please cite the following paper if you intend to use our model or its associated architectures/approaches in your work.
### BibTeX:
```
@article{graf2025flowstate,
  title={FlowState: Sampling Rate Invariant Time Series Forecasting},
  author={Graf, Lars and Ortner, Thomas and Wo{\'z}niak, Stanis{\l}aw and Pantazi, Angeliki and others},
  journal={arXiv preprint arXiv:2508.05287},
  year={2025}
}
```
## Model Card Authors
Lars Graf, Thomas Ortner, Stanislaw Wozniak, Angeliki Pantazi
## IBM Public Repository Disclosure
All content in this repository, including code, has been provided by IBM under the associated open-source software license, and IBM is under no obligation to provide enhancements, updates, or support. IBM developers produced this code as an open-source project (not as an IBM product); IBM makes no assertions as to its level of quality or security and will not be maintaining this code going forward.
figs/FlowState.png CHANGED

Git LFS Details

  • SHA256: 92eb6956f449a3e5611e98d4f8be76ff2206da55f1086dc7ca6ab7a84413335b
  • Pointer size: 131 Bytes
  • Size of remote file: 565 kB

Git LFS Details

  • SHA256: f31d78d7ccb54173ed6642beaf9b5b54f42811eb9d53a2c1f520868d65e7b764
  • Pointer size: 131 Bytes
  • Size of remote file: 601 kB
figs/flowstate_performance.png CHANGED

Git LFS Details

  • SHA256: 0a8c6d33fc36890ac951f6153dbb2861b1a47e10afb730b934ecdf8ca696b699
  • Pointer size: 132 Bytes
  • Size of remote file: 1.07 MB

Git LFS Details

  • SHA256: 8c6f7ffd0761e2a83b1cedca4002b2aa20ebb22f163376ab6c3ac3522765c23b
  • Pointer size: 131 Bytes
  • Size of remote file: 256 kB