ccloud0525 committed
Commit 1dde777 · 1 Parent(s): 2e843e9

feat: update README

Files changed (1): README.md (+59 −7)

README.md CHANGED
 
## Introduction

Aurora is a highly capable multimodal time series foundation model. Built on **Modality-Guided Multi-head Self-Attention** and **Prototype-Guided Flow Matching**, Aurora effectively exploits the domain-specific knowledge carried by auxiliary modalities and supports generative probabilistic forecasting, covering versatile forecasting scenarios.
 
<img alt="intro" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/d82jT96jiGD0QL9s8RYg-.png" width="100%"/>
</div>

## Quickstart

#### From pypi (recommended)

We have published Aurora on PyPI, so **you can install it with a single line of code:**

```shell
$ pip install aurora-model==0.1.0
```
Then you can use the Aurora model to make zero-shot probabilistic forecasts!

##### Unimodal Time Series Forecasting

```python
from aurora import load_model
import os
import torch

# os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
model = load_model()

# prepare input
batch_size, lookback_length = 1, 528
seqs = torch.randn(batch_size, lookback_length).cuda()

# Note that Aurora can generate multiple probable predictions
forecast_length = 96
num_samples = 100

output = model.generate(inputs=seqs, max_output_length=forecast_length, num_samples=num_samples)

# use raw predictions for mean/quantiles/confidence-interval estimation
print(output.shape)
```
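Because the raw output is a set of sampled trajectories, point and interval forecasts come from simple statistics over the sample axis. A minimal sketch, assuming the output has a `(batch_size, num_samples, forecast_length)` layout (an assumption; check the repo for the exact shape), here with a numpy stand-in for the generated tensor:

```python
import numpy as np

# Stand-in for the generated samples; the (batch, num_samples, forecast_length)
# layout is an assumption for illustration.
samples = np.random.randn(1, 100, 96)

point_forecast = samples.mean(axis=1)                # (1, 96) mean forecast
q10, q90 = np.quantile(samples, [0.1, 0.9], axis=1)  # 80% central interval bounds
```

The same reduction works on a torch tensor with `tensor.mean(dim=1)` and `torch.quantile`.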
##### Multimodal Time Series Forecasting

```python
from aurora import load_model
from einops import rearrange
import os
import torch

# os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'
model = load_model()
tokenizer = model.tokenizer

# prepare input
batch_size, n_vars, lookback_length, max_text_length = 1, 10, 528, 200
seqs = torch.randn(batch_size, lookback_length, n_vars).cuda()

text = "1983-09-12: The Federal Register provides a uniform system for making available to the public regulations and legal notices issued by federal agencies in the United States."

tokenized_text = tokenizer(text, padding='max_length', truncation=True, max_length=max_text_length, return_tensors="pt")
text_input_ids = tokenized_text['input_ids'].cuda()
text_attention_mask = tokenized_text['attention_mask'].cuda()
text_token_type_ids = tokenized_text.get('token_type_ids', torch.zeros_like(text_input_ids)).cuda()

# broadcast the text prompt to every variate and flatten the variate dimension
batch_input_ids = text_input_ids.repeat(n_vars, 1)
batch_attention_mask = text_attention_mask.repeat(n_vars, 1)
batch_token_type_ids = text_token_type_ids.repeat(n_vars, 1)
batch_x = rearrange(seqs, "b l c -> (b c) l")

# Note that Aurora can generate multiple probable predictions
forecast_length = 96
num_samples = 100

# For inference_token_len, you can refer to LightGTS (Periodic Patching).
# We recommend using the period length as the inference_token_len.
output = model.generate(inputs=batch_x, text_input_ids=batch_input_ids,
                        text_attention_mask=batch_attention_mask,
                        text_token_type_ids=batch_token_type_ids,
                        max_output_length=forecast_length,
                        num_samples=num_samples,
                        inference_token_len=48)

# use raw predictions for mean/quantiles/confidence-interval estimation
print(output.shape)
```
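The repeat/rearrange bookkeeping above broadcasts one text prompt to every variate and flattens the channel dimension for channel-independent inference. The same transform can be sketched with plain numpy (dummy values; `rearrange(seqs, "b l c -> (b c) l")` is equivalent to a transpose plus reshape):

```python
import numpy as np

b, l, c = 1, 528, 10          # batch, lookback, variates (as in the snippet above)
seqs = np.random.randn(b, l, c)

# rearrange(seqs, "b l c -> (b c) l"): move variates before time, then merge
# batch and variate axes, so each row is one variate's lookback window
batch_x = seqs.transpose(0, 2, 1).reshape(b * c, l)

# text tensors are tiled the same way: one copy per variate
text_input_ids = np.arange(200).reshape(1, 200)      # dummy token ids
batch_input_ids = np.tile(text_input_ids, (c, 1))    # == tensor.repeat(n_vars, 1)
```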
#### From raw code

We release the original code of Aurora in this repo. You can also download the pretrained checkpoints from our [huggingface](https://huggingface.co/DecisionIntelligence/Aurora) repo and put them in the folder `aurora/`. Install the dependencies first with `pip install transformers[torch]`.

```python
from huggingface_hub import snapshot_download
import os
import torch

# os.environ['HF_ENDPOINT'] = 'https://hf-mirror.com'

# --- Configuration ---
# ... (checkpoint download and model setup)

model = AuroraForPrediction.from_pretrained("./", trust_remote_code=True)

# prepare input
batch_size, lookback_length = 1, 528
seqs = torch.randn(batch_size, lookback_length).cuda()

# Note that Aurora can generate multiple probable predictions
forecast_length = 96
num_samples = 100

output = model.generate(inputs=seqs, max_output_length=forecast_length, num_samples=num_samples)

# use raw predictions for mean/quantiles/confidence-interval estimation
print(output.shape)
```
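For the checkpoint download step, a minimal sketch using `huggingface_hub`'s `snapshot_download`; the repo id comes from the huggingface link above, while `local_dir="aurora/"` is an assumed choice matching the folder the text recommends:

```python
from huggingface_hub import snapshot_download

# Fetch the pretrained Aurora checkpoints into ./aurora/ (network download;
# repo id taken from the link above, local_dir is an assumption)
snapshot_download(repo_id="DecisionIntelligence/Aurora", local_dir="aurora/")
```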
## Experiments

You should refer to our [github repo](https://github.com/decisionintelligence/Aurora) for the complete experimental pipelines. For benchmarking (TSFM-Bench, ProbTS, TimeMMD, TFB, and EPF), you can install the additional packages listed in the requirement files under the corresponding folders; the datasets can be fetched from this [link](https://drive.google.com/file/d/12tJk858WaoG7ZVSvUq8KU1oHfGNJrARF/view?usp=drive_link). All experimental results can be reproduced by running the scripts in the benchmark folders, e.g.:

```shell
TFB/scripts/run_aurora_tfb.sh
TFB/scripts/run_aurora_uni.sh
```
## Performance

**Aurora achieves consistent state-of-the-art performance on these 5 benchmarks:**