Update README.md
README.md
Aurora is a highly capable multimodal time series foundation model.

As shown in **Figure 1**, to the best of our knowledge, Aurora is the first pretrained multimodal time series foundation model. Evaluated on three well-recognized benchmarks (TimeMMD, TSFM-Bench, and ProbTS), Aurora demonstrates state-of-the-art performance.

<div align="center">
<img alt="intro" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/YdsPeh5mrn_lef19vQXfa.png" width="60%"/>
</div>

## Architecture

In this work, we pretrain Aurora in a cross-modality paradigm, which adopts Channel-Independence on the time series data and models the corresponding multimodal interaction to inject domain knowledge. Note that each variable of the time series is first normalized through Instance Normalization to mitigate value discrepancies across variables. As shown in **Figure 2**, Aurora mainly consists of two phases: 1) in the Aurora Encoder, we tokenize and encode each modality into modal features, then fuse them to form multimodal representations; 2) in the Aurora Decoder, we utilize a Condition Decoder to obtain the multimodal conditions of future tokens, leverage a Prototype Retriever to retrieve the future prototypes based on the domain knowledge, and conduct flow matching on them to make generative probabilistic forecasts.
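The per-variable Instance Normalization step described above can be sketched as follows. This is a minimal illustration, not Aurora's actual preprocessing code; the function name, the `(num_variables, seq_len)` layout, and the epsilon are assumptions for exposition:

```python
import numpy as np

def instance_normalize(x: np.ndarray, eps: float = 1e-5):
    """Normalize each variable (channel) of a time series independently.

    x: array of shape (num_variables, seq_len). Each row is z-scored with
    its own mean and standard deviation, so variables on very different
    scales become comparable. The statistics are returned so forecasts
    can later be de-normalized back to the original scale.
    """
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps), mean, std

# Two variables on very different scales:
series = np.array([[1000.0, 1010.0, 990.0, 1005.0],
                   [0.1, 0.3, 0.2, 0.4]])
normed, mean, std = instance_normalize(series)
```

After normalization each row has (approximately) zero mean and unit variance, which is what "mitigating the value discrepancy" between variables amounts to here.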

<div align="center">
<img alt="intro" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/d82jT96jiGD0QL9s8RYg-.png" width="100%"/>
</div>
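The decoder's two generative ingredients, retrieving future prototypes for a condition and building a flow-matching training pair on them, can be illustrated with a minimal sketch. All names, shapes, the cosine-similarity retrieval, and the linear interpolation path are assumptions for exposition, not Aurora's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieve_prototypes(condition, prototypes, k=2):
    """Return the k prototype vectors most cosine-similar to the condition."""
    sims = (prototypes @ condition) / (
        np.linalg.norm(prototypes, axis=1) * np.linalg.norm(condition) + 1e-8)
    return prototypes[np.argsort(sims)[-k:]]

def flow_matching_pair(x1, rng):
    """Build one conditional-flow-matching training pair.

    Along the linear path x_t = (1 - t) * x0 + t * x1 from noise x0 to
    data x1, the regression target for the velocity field is x1 - x0
    (constant in t for a linear path).
    """
    x0 = rng.standard_normal(x1.shape)   # noise sample
    t = rng.uniform()                    # random time in [0, 1)
    xt = (1 - t) * x0 + t * x1           # point on the path
    target_velocity = x1 - x0            # what the model would regress to
    return xt, t, target_velocity

# A hypothetical bank of 8 future prototypes of dimension 4, and a condition:
prototypes = rng.standard_normal((8, 4))
condition = rng.standard_normal(4)
selected = retrieve_prototypes(condition, prototypes, k=2)
xt, t, v = flow_matching_pair(selected, rng)
```

A trained velocity model would take `(xt, t)` plus the multimodal condition and be regressed toward `target_velocity`; sampling then integrates the learned field from noise to a probabilistic forecast.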

## Quickstart

`TFB/scripts/run_aurora_uni.sh`

**Aurora achieves consistent state-of-the-art performance on these five benchmarks:**

<div align="center">
<img alt="arch" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/Vh0ENMXJWwiPkWvMeeftG.png" width="100%"/>
</div>

<div align="center">
<img alt="arch" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/2nPl7KumS6DU2lRzm8ACr.png" width="100%"/>
</div>

<div align="center">
<img alt="arch" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/glgp6HoirIEO3yWBQD2Hw.png" width="100%"/>
</div>

<div align="center">
<img alt="arch" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/RmOgS8recYalH-FjsfEOM.png" width="100%"/>
</div>

<div align="center">
<img alt="arch" src="https://cdn-uploads.huggingface.co/production/uploads/66276727368ec2a0b933772c/JatnUn_fSmD2eJdMPb68y.png" width="100%"/>
</div>