Files changed (1)
  1. README.md +56 -56
README.md CHANGED
@@ -1,57 +1,57 @@
- ---
- base_model: Qwen/Qwen2.5-1.5B-Instruct
- datasets: open-r1/OpenR1-Math-220k
- library_name: transformers
- model_name: Qwen2.5-1.5B-Open-R1-Distill-20kdata
- tags:
- - generated_from_trainer
- - open-r1
- - trl
- - sft
- licence: license
- ---
-
- # Model Card for Qwen2.5-1.5B-Open-R1-Distill
-
- This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset, using only 20k of its examples.
- It has been trained using [TRL](https://github.com/huggingface/trl).
-
- ## Quick start
-
- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="roaminwind/Qwen2.5-1.5B-Open-R1-Distill-20kdata", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```
-
- ## Training procedure
-
- This model was trained with SFT.
-
- ### Framework versions
-
- - TRL: 0.16.0
- - Transformers: 4.50.0
- - PyTorch: 2.5.1
- - Datasets: 3.5.0
- - Tokenizers: 0.21.1
-
- ## Citations
-
-
-
- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
  ```
 
+ ---
+ base_model: Qwen/Qwen2.5-1.5B-Instruct
+ datasets: open-r1/OpenR1-Math-220k
+ library_name: transformers
+ model_name: Qwen2.5-1.5B-Open-R1-Distill-20kdata
+ tags:
+ - generated_from_trainer
+ - open-r1
+ - trl
+ - sft
+ licence: license
+ ---
+
+ # Model Card for Qwen2.5-1.5B-Open-R1-Distill-20kdata
+
+ This model is a fine-tuned version of [Qwen/Qwen2.5-1.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-1.5B-Instruct) on the [open-r1/OpenR1-Math-220k](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k) dataset, using only 20k of its examples.
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="roaminwind/Qwen2.5-1.5B-Open-R1-Distill-20kdata", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
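+ For more control over generation, the model can also be loaded directly with `transformers`; the snippet below is a minimal sketch (the prompt and generation settings are illustrative only):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "roaminwind/Qwen2.5-1.5B-Open-R1-Distill-20kdata"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id).to("cuda")
+
+ # Build the chat prompt with the model's chat template and generate a reply.
+ messages = [{"role": "user", "content": "Solve 2x + 3 = 11 for x."}]
+ inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
+ outputs = model.generate(inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+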
+ ## Training procedure
+
+ This model was trained with SFT.
+
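+ For reference, a comparable SFT run could be launched with TRL's `SFTTrainer` roughly as sketched below. The 20k-example subset selection, the reliance on the dataset's `messages` column, and the hyperparameters shown are assumptions for illustration, not the exact settings used to train this model.
+
+ ```python
+ from datasets import load_dataset
+ from trl import SFTConfig, SFTTrainer
+
+ # Assumed: take the first 20k examples of the dataset's default "train" split.
+ train_dataset = load_dataset("open-r1/OpenR1-Math-220k", split="train[:20000]")
+
+ training_args = SFTConfig(
+     output_dir="Qwen2.5-1.5B-Open-R1-Distill-20kdata",
+     num_train_epochs=1,              # illustrative value only
+     per_device_train_batch_size=4,   # illustrative value only
+ )
+
+ trainer = SFTTrainer(
+     model="Qwen/Qwen2.5-1.5B-Instruct",
+     args=training_args,
+     # The dataset's "messages" column is assumed to hold the chat-formatted examples.
+     train_dataset=train_dataset,
+ )
+ trainer.train()
+ ```
+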
+ ### Framework versions
+
+ - TRL: 0.16.0
+ - Transformers: 4.50.0
+ - PyTorch: 2.5.1
+ - Datasets: 3.5.0
+ - Tokenizers: 0.21.1
+
+ ## Citations
+
+
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+ title = {{TRL: Transformer Reinforcement Learning}},
+ author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
+ year = 2020,
+ journal = {GitHub repository},
+ publisher = {GitHub},
+ howpublished = {\url{https://github.com/huggingface/trl}}
+ }
  ```