Transformers Β· GGUF Β· llama-factory Β· full Β· Generated from Trainer Β· conversational
lbourdois committed (verified) Β· Commit 7b27830 Β· 1 parent: 4f99f61

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1)
  1. README.md +128 -117
README.md CHANGED
@@ -1,117 +1,128 @@
-
- ---
-
- library_name: transformers
- license: apache-2.0
- base_model: Qwen/Qwen2.5-7B-Instruct
- tags:
- - llama-factory
- - full
- - generated_from_trainer
- model-index:
- - name: OpenThinker-7B
-   results: []
- datasets:
- - open-thoughts/open-thoughts-114k
-
- ---
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-7B-Instruct
+ tags:
+ - llama-factory
+ - full
+ - generated_from_trainer
+ datasets:
+ - open-thoughts/open-thoughts-114k
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ model-index:
+ - name: OpenThinker-7B
+   results: []
+ ---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/OpenThinker-7B-GGUF
This is a quantized version of [open-thoughts/OpenThinker-7B](https://huggingface.co/open-thoughts/OpenThinker-7B), created using llama.cpp.
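
As a quick sanity check, the GGUF files can be run with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). A minimal sketch, assuming one of the quantized files has been downloaded locally; the filename below is illustrative, so substitute whichever quantization you actually pulled:

```python
# Minimal sketch: run a GGUF quantization of OpenThinker-7B with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="OpenThinker-7B.Q4_K_M.gguf",  # hypothetical filename; use the quant you downloaded
    n_ctx=4096,  # context window; long reasoning traces may need more
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many primes are there below 30?"}],
    max_tokens=512,
)
print(out["choices"][0]["message"]["content"])
```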

# Original Model Card

<p align="center">
  <img src="https://huggingface.co/datasets/open-thoughts/open-thoughts-114k/resolve/main/open_thoughts.png" width="50%">
</p>

# OpenThinker-7B

This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the [OpenThoughts-114k](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) dataset.
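
For reference, a minimal Transformers generation sketch against the original (unquantized) checkpoint; the prompt and generation settings here are illustrative, and the chat template is the one shipped with the checkpoint:

```python
# Minimal sketch: generate with the original OpenThinker-7B checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are there below 50?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```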

The dataset was derived by distilling DeepSeek-R1 using the [data pipeline available on GitHub](https://github.com/open-thoughts/open-thoughts).
More information about the dataset can be found on the [OpenThoughts-114k dataset card](https://huggingface.co/datasets/open-thoughts/open-thoughts-114k).
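
The training data can also be inspected directly from the Hub with the `datasets` library; a small sketch (the `train` split name is an assumption, so check the dataset card for the exact splits):

```python
# Minimal sketch: pull and inspect the distilled training data from the Hub.
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts-114k", split="train")
print(ds)     # features and row count
print(ds[0])  # one distilled example
```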

This model improves upon the [Bespoke-Stratos-7B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-7B), which was trained on 17k examples ([Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)).
The numbers reported in the table below were evaluated with our open-source tool [Evalchemy](https://github.com/mlfoundations/Evalchemy).

| | AIME24 | MATH500 | GPQA-Diamond | LCBv2 Easy | LCBv2 Medium | LCBv2 Hard | LCBv2 All |
| --------------------------- | ------ | ------- | ------------ | ---------- | ------------ | ---------- | --------- |
| OpenThinker-7B              | 31.3   | 83.0    | 42.4         | 75.3       | 28.6         | 6.5        | 39.9      |
| Bespoke-Stratos-7B          | 22.7   | 79.6    | 38.9         | 71.4       | 25.2         | 0.8        | 35.8      |
| DeepSeek-R1-Distill-Qwen-7B | 60.0   | 88.2    | 46.9         | 79.7       | 45.1         | 14.6       | 50.1      |
| gpt-4o-0513                 | 8.7    | 75.8    | 46.5         | 87.4       | 42.7         | 8.9        | 50.5      |
| o1-mini                     | 64.0   | 85.6    | 60.0         | 92.8       | 74.7         | 39.8       | 72.8      |

We are fully open-source. Our [model weights](https://huggingface.co/open-thoughts), [datasets](https://huggingface.co/open-thoughts), [data generation code](https://github.com/open-thoughts/open-thoughts), [evaluation code](https://github.com/mlfoundations/Evalchemy), and [training code](https://github.com/hiyouga/LLaMA-Factory) are all publicly available.

| | Open Weights | Open Data | Open Code |
|--|--------------|-----------|-----------|
| OpenThinker-7B | βœ… | [βœ…](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k) | [βœ…](https://github.com/open-thoughts/open-thoughts) |
| Bespoke-Stratos-7B | βœ… | [βœ…](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k) | [βœ…](https://github.com/bespokelabsai/curator/tree/main/examples/bespoke-stratos-data-generation) |
| DeepSeek-R1-Distill-Qwen-7B | βœ… | ❌ | ❌ |
| gpt-4o-0513 | ❌ | ❌ | ❌ |
| o1-mini | ❌ | ❌ | ❌ |


## Intended uses & limitations

Released under the Apache 2.0 license.


## Training procedure

We trained the model for 20 hours on four 8xH100 nodes (32 GPUs in total).

### Training hyperparameters

The following hyperparameters were used during training (the sketch after this list works through the effective batch size):
85
+ - learning_rate: 1e-05
86
+ - train_batch_size: 1
87
+ - eval_batch_size: 8
88
+ - seed: 42
89
+ - distributed_type: multi-GPU
90
+ - num_devices: 32
91
+ - gradient_accumulation_steps: 3
92
+ - total_train_batch_size: 96
93
+ - total_eval_batch_size: 256
94
+ - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
95
+ - lr_scheduler_type: cosine
96
+ - lr_scheduler_warmup_ratio: 0.1
97
+ - num_epochs: 3.0
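
As a sanity check on these numbers, the effective global batch size is the per-device batch size Γ— number of devices Γ— gradient-accumulation steps, and the schedule is linear warmup into cosine decay. A small sketch with the values copied from the list above; `total_steps` is illustrative, since the real value depends on the dataset length:

```python
import math

# Values copied from the hyperparameter list above.
train_batch_size = 1   # per-device micro-batch
num_devices = 32       # four 8xH100 nodes
grad_accum_steps = 3
warmup_ratio = 0.1
peak_lr = 1e-05

# Effective global batch size: 1 * 32 * 3 = 96, matching total_train_batch_size.
total_train_batch_size = train_batch_size * num_devices * grad_accum_steps
assert total_train_batch_size == 96

def cosine_lr_with_warmup(step: int, total_steps: int) -> float:
    """Linear warmup over the first warmup_ratio of steps, then cosine decay to 0."""
    warmup_steps = int(warmup_ratio * total_steps)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(total_train_batch_size)            # 96
print(cosine_lr_with_warmup(500, 1000))  # partway through the cosine decay
```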

### Framework versions

- Transformers 4.46.1
- PyTorch 2.3.0
- Datasets 3.1.0
- Tokenizers 0.20.3

More info can be found in our repository: [https://github.com/open-thoughts/open-thoughts](https://github.com/open-thoughts/open-thoughts).

# Citation
```bibtex
@misc{openthoughts,
  author = {Team, OpenThoughts},
  month = jan,
  title = {{Open Thoughts}},
  howpublished = {https://open-thoughts.ai},
  year = {2025}
}
```

# Links
- πŸ“Š [Open Thoughts Launch Blog Post](https://www.open-thoughts.ai/blog/launch)
- πŸ’» [Open Thoughts GitHub Repository](https://github.com/open-thoughts/open-thoughts)
- 🧠 [OpenThoughts-114k dataset](https://huggingface.co/datasets/open-thoughts/OpenThoughts-114k)
- πŸ€– [OpenThinker-7B model](https://huggingface.co/open-thoughts/OpenThinker-7B) - this model.
- πŸ“Š [Bespoke-Stratos Blog Post](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation)
- 🧠 [Bespoke-Stratos-17k dataset](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k)
- πŸ€– [Bespoke-Stratos-32B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-32B)
- πŸ€– [Bespoke-Stratos-7B model](https://huggingface.co/bespokelabs/Bespoke-Stratos-7B)