Improve language tag

#1
by lbourdois - opened
Files changed (1)
  1. README.md +80 -66
README.md CHANGED
@@ -1,67 +1,81 @@
- ---
- base_model: Qwen/Qwen2.5-1.5B
- datasets: xiaodongguaAIGC/X-R1-7500
- library_name: transformers
- tags:
- - generated_from_trainer
- - X-R1
- licence: license
- ---
-
- # Model Card for None
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the [xiaodongguaAIGC/X-R1-7500](https://huggingface.co/datasets/xiaodongguaAIGC/X-R1-7500) dataset.
- It has been trained using [TRL](https://github.com/huggingface/trl).
-
- ## Quick start
-
- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="None", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
-
- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/smartrichard_team1/huggingface/runs/rx351n7r)
-
-
- This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
-
- ### Framework versions
-
- - TRL: 0.15.0
- - Transformers: 4.48.2
- - Pytorch: 2.5.1
- - Datasets: 3.3.2
- - Tokenizers: 0.21.0
-
- ## Citations
-
- Cite GRPO as:
-
- ```bibtex
- @article{zhihong2024deepseekmath,
- title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
- author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
- year = 2024,
- eprint = {arXiv:2402.03300},
- }
-
- ```
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
+ ---
+ base_model: Qwen/Qwen2.5-1.5B
+ datasets: xiaodongguaAIGC/X-R1-7500
+ library_name: transformers
+ tags:
+ - generated_from_trainer
+ - X-R1
+ licence: license
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ # Model Card for None
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B](https://huggingface.co/Qwen/Qwen2.5-1.5B) on the [xiaodongguaAIGC/X-R1-7500](https://huggingface.co/datasets/xiaodongguaAIGC/X-R1-7500) dataset.
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="None", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/smartrichard_team1/huggingface/runs/rx351n7r)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.15.0
+ - Transformers: 4.48.2
+ - Pytorch: 2.5.1
+ - Datasets: 3.3.2
+ - Tokenizers: 0.21.0
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{zhihong2024deepseekmath,
+ title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+ author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+ year = 2024,
+ eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
  ```
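
For context, the `language:` block added in this PR uses ISO 639-3 codes (zho, eng, fra, ...), which appear to match the languages highlighted on the Qwen2.5 model card. A minimal sketch of reading the updated metadata back with `huggingface_hub`, assuming a hypothetical repo id since the card above is still titled "Model Card for None":

```python
from huggingface_hub import ModelCard

# Hypothetical repo id: the card above has no model name yet,
# so substitute the actual repository once the model is pushed to the Hub.
card = ModelCard.load("your-username/your-x-r1-model")

# card.data exposes the YAML front matter shown in the diff above.
print(card.data.language)    # e.g. ['zho', 'eng', 'fra', 'spa', ...]
print(card.data.base_model)  # 'Qwen/Qwen2.5-1.5B'
```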