ChengsenWang committed on
Commit 677bb03 · verified · 0 Parent(s)

Duplicate from ChengsenWang/ChatTime-1-Pretrain-1M

Co-authored-by: Chengsen Wang <ChengsenWang@users.noreply.huggingface.co>
Files changed (4):
  1. .gitattributes +56 -0
  2. ChatTime-1-Pretrain-1M.csv +3 -0
  3. README.md +56 -0
  4. architecture.png +3 -0
.gitattributes ADDED
@@ -0,0 +1,56 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ ChatTime-1-Pretrain-1M.csv filter=lfs diff=lfs merge=lfs -text
ChatTime-1-Pretrain-1M.csv ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4fb9d02ffacbc8a9e30d1b5f632a8abd051ec1ce7608f2f8c8ba64e5ec3fbe25
+ size 6964660553
README.md ADDED
@@ -0,0 +1,56 @@
+ ---
+ license: apache-2.0
+ task_categories:
+ - time-series-forecasting
+ tags:
+ - time-series
+ - multimodality
+ - pretrained-model
+ - foundation-model
+ - multimodal-time-series-foundation-model
+ size_categories:
+ - 100K<n<1M
+ ---
+
+ # ChatTime: A Multimodal Time Series Foundation Model
+
+ ## ✨ Introduction
+
+ In this paper, we model time series as a foreign language and construct ChatTime, a unified framework for time series and text processing. As an out-of-the-box multimodal time series foundation model, ChatTime provides zero-shot forecasting and supports bimodal input/output for both time series and text. We design a series of experiments to verify ChatTime's superior performance across multiple tasks and scenarios, and create four multimodal datasets to fill gaps in the available data. The experimental results demonstrate the potential and utility of ChatTime.
+
+ As depicted in Figure 1(b), during the continuous pre-training stage, we pre-train [LLaMA-2-7B-Base](https://huggingface.co/meta-llama/Llama-2-7b-hf) on [ChengsenWang/ChatTime-1-Pretrain-1M](https://huggingface.co/datasets/ChengsenWang/ChatTime-1-Pretrain-1M), yielding [ChengsenWang/ChatTime-1-7B-Base](https://huggingface.co/ChengsenWang/ChatTime-1-7B-Base).
+
+ For details on the ChatTime models, training data, training procedures, and experimental results, please refer to the [arXiv](https://arxiv.org/abs/2412.11376) paper.
+
+ ![](architecture.png)
+
+ ## 💾 Dataset
+
+ The data for continuous pre-training is sourced from two extensive open-source time series repositories, [Monash](https://forecastingdata.org/) and [TFB](https://github.com/decisionintelligence/TFB), which together encompass approximately 100 sub-datasets. We slice the original time series with sliding windows, using the five window and step sizes listed in the table below and prioritizing larger segments. Because the raw slices contain many repeating patterns and our computational resources are limited, we run K-means on the 10M raw slices, clustering them into 1M and 25K groups and randomly selecting one sample from each group as its representative. This yields a high-quality dataset for continuous pre-training (1M) and another for instruction fine-tuning (25K).
+
+ | Window Size | History Length | Prediction Length | Sliding Step |
+ | :---------: | :------------: | :---------------: | :----------: |
+ | 576 | 512 | 64 | 32 |
+ | 288 | 256 | 32 | 16 |
+ | 144 | 128 | 16 | 8 |
+ | 72 | 64 | 8 | 4 |
+ | 36 | 32 | 4 | 2 |
+
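The slicing-plus-clustering pipeline described above can be sketched as follows. This is a minimal illustration on one synthetic series, using one (window, step) pair from the table and a toy numpy-only K-means; the actual pipeline clusters 10M slices drawn from roughly 100 sub-datasets.

```python
import numpy as np

# (window, step) pairs from the table above; larger windows are tried first.
CONFIGS = [(576, 32), (288, 16), (144, 8), (72, 4), (36, 2)]

def slice_series(series, window, step):
    """All contiguous slices of length `window`, strided by `step`."""
    return [series[i:i + window] for i in range(0, len(series) - window + 1, step)]

def kmeans_representatives(slices, k, n_iter=20, seed=0):
    """Toy numpy-only K-means: cluster fixed-length slices, then return one
    representative per non-empty cluster (the slice closest to its centroid)."""
    X = np.stack(slices)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    # Final assignment with the converged centroids.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    reps = []
    for j in range(k):
        members = np.where(labels == j)[0]
        if len(members):
            reps.append(X[members[np.argmin(dists[members, j])]])
    return reps

series = np.sin(np.linspace(0, 50, 2000))        # stand-in for one raw series
slices = slice_series(series, window=144, step=8)
reps = kmeans_representatives(slices, k=5)
print(len(slices), len(reps))
```

The real pipeline selects the actual member closest to each centroid rather than the centroid itself, so every retained sample is a genuine slice of some original series.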
+ For details on the pre-training dataset, please refer to the [arXiv](https://arxiv.org/abs/2412.11376) paper.
+
+ ## 📝 Citation
+
+ If you find this repo or our work useful for your research, please consider citing the paper:
+
+ ```tex
+ @inproceedings{wang2025chattime,
+   author    = {Chengsen Wang and Qi Qi and Jingyu Wang and Haifeng Sun and Zirui Zhuang and Jinming Wu and Lei Zhang and Jianxin Liao},
+   title     = {ChatTime: A Unified Multimodal Time Series Foundation Model Bridging Numerical and Textual Data},
+   booktitle = {AAAI Conference on Artificial Intelligence},
+   year      = {2025},
+ }
+ ```
53
+
54
+ ## 📪 Contact
55
+
56
+ If you have any question, please contact [cswang@bupt.edu.cn]().
architecture.png ADDED

Git LFS Details

  • SHA256: acc14adddd5e8986d8857509d1f3f731020ee423970c0848e488b865c6c6231b
  • Pointer size: 131 Bytes
  • Size of remote file: 310 kB