We use the following Colab page to demo building a custom dataset and running inference directly with our pre-trained foundation model: [[Colab]](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing)
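The custom-dataset Colab above ultimately needs your series arranged as fixed-length input windows paired with forecast targets. As a rough, framework-free sketch (the helper name and window lengths here are illustrative, not TEMPO's actual API), the usual sliding-window construction looks like:

```python
import numpy as np

def make_windows(series: np.ndarray, seq_len: int, pred_len: int):
    """Slide over a 1-D series to build (input, target) pairs.

    Illustrative helper, not part of the TEMPO codebase: each input is
    `seq_len` past steps, each target the following `pred_len` steps.
    """
    n_windows = len(series) - seq_len - pred_len + 1
    inputs = np.stack([series[i : i + seq_len] for i in range(n_windows)])
    targets = np.stack(
        [series[i + seq_len : i + seq_len + pred_len] for i in range(n_windows)]
    )
    return inputs, targets

# Dummy series standing in for one channel of a dataset like ETTh2.
series = np.sin(np.linspace(0, 20, 200))
X, Y = make_windows(series, seq_len=96, pred_len=24)
print(X.shape, Y.shape)  # (81, 96) (81, 24)
```

In the Colab, the pre-trained model then consumes windows like `X` and produces forecasts that can be compared against the held-out targets `Y`.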
# Practice
Please try our foundation model demo [[here]](https://4171a8a7484b3e9148.gradio.live).

![](pics/TEMPO_demo.jpg)

## Practice on your end
You can download the processed data with text embedding from GPT2 from: [[TETS]](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link).
## News

- **Oct 2024**: We've streamlined our code structure, enabling users to download the pre-trained model and perform zero-shot inference with a single line of code! Check out our [demo](./run_TEMPO_demo.py) for more details. Our model's download count on HuggingFace is now trackable!

- **Jun 2024**: We added demos for reproducing the zero-shot experiments in [Colab](https://colab.research.google.com/drive/11qGpT7H1JMaTlMlm9WtHFZ3_cJz7p-og?usp=sharing), as well as a demo of building a custom dataset and running inference directly with our pre-trained foundation model: [Colab](https://colab.research.google.com/drive/1ZpWbK0L6mq1pav2yDqOuORo4rHbv80-A?usp=sharing).

- **May 2024**: TEMPO launched a GUI-based online [demo](https://4171a8a7484b3e9148.gradio.live/), allowing users to interact directly with our foundation model!

- **May 2024**: TEMPO published the 80M pre-trained foundation model on [HuggingFace](https://huggingface.co/Melady/TEMPO)!

- **May 2024**: 🧪 We added code for pre-training and running inference with TEMPO models. You can find a pre-training script demo in [this folder](./scripts/etth2.sh), and [a script](./scripts/etth2_test.sh) for the inference demo.

- **Mar 2024**: Released the [TETS dataset](https://drive.google.com/file/d/1Hu2KFj0kp4kIIpjbss2ciLCV_KiBreoJ/view?usp=drive_link), built from the [S&P 500](https://www.spglobal.com/spdji/en/indices/equity/sp-500/#overview) and used in TEMPO's multimodal experiments.

- **Mar 2024**: 🧪 TEMPO published the project [code](https://github.com/DC-research/TEMPO) and the pre-trained checkpoint [online](https://drive.google.com/file/d/11Ho_seP9NGh-lQCyBkvQhAQFy_3XVwKp/view?usp=drive_link)!

- **Jan 2024**: The TEMPO [paper](https://openreview.net/pdf?id=YH5w12OUuU) was accepted at ICLR!

- **Oct 2023**: The TEMPO [paper](https://arxiv.org/pdf/2310.04948) was released on arXiv!

## ⏳ Upcoming Features

- [✅] Parallel pre-training pipeline
- [ ] Probabilistic forecasting
- [ ] Multimodal dataset
- [ ] Multimodal pre-training script

## Contact

Feel free to contact DefuCao@USC.EDU / YanLiu.CS@USC.EDU if you're interested in applying TEMPO to your real-world application.