michlea committed
Commit fef8bbb · verified · 1 Parent(s): ca92362

Training in progress, step 100
README.md CHANGED

````diff
@@ -1,22 +1,69 @@
 ---
 base_model: unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
+library_name: transformers
+model_name: HidatoQwenModel
 tags:
-- text-generation-inference
-- transformers
-- unsloth
-- qwen2
+- generated_from_trainer
+- grpo
 - trl
-license: apache-2.0
-language:
-- en
+- unsloth
+licence: license
 ---
 
-# Uploaded model
+# Model Card for HidatoQwenModel
+
+This model is a fine-tuned version of [unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit](https://huggingface.co/unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit).
+It has been trained using [TRL](https://github.com/huggingface/trl).
+
+## Quick start
+
+```python
+from transformers import pipeline
+
+question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+generator = pipeline("text-generation", model="michlea/HidatoQwenModel", device="cuda")
+output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+print(output["generated_text"])
+```
+
+## Training procedure
+
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/michlea-hse-spb/huggingface/runs/m65yzfna)
+
+
+This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+### Framework versions
+
+- TRL: 0.23.0
+- Transformers: 4.55.4
+- Pytorch: 2.7.1
+- Datasets: 3.6.0
+- Tokenizers: 0.21.4
+
+## Citations
+
+Cite GRPO as:
 
-- **Developed by:** michlea
-- **License:** apache-2.0
-- **Finetuned from model :** unsloth/qwen2.5-3b-instruct-unsloth-bnb-4bit
+```bibtex
+@article{shao2024deepseekmath,
+    title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+    year = 2024,
+    eprint = {arXiv:2402.03300},
+}
 
-This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
+```
 
-[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+Cite TRL as:
+
+```bibtex
+@misc{vonwerra2022trl,
+    title = {{TRL: Transformer Reinforcement Learning}},
+    author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
+    year = 2020,
+    journal = {GitHub repository},
+    publisher = {GitHub},
+    howpublished = {\url{https://github.com/huggingface/trl}}
+}
+```
````
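The new card says the model was trained with GRPO via TRL, but does not show the reward setup. For context, TRL's `GRPOTrainer` accepts plain Python callables as reward functions, each mapping a batch of completions to one float score per completion. A minimal sketch of that interface follows; the length-penalty scoring rule is purely hypothetical and is not taken from this training run:

```python
# Sketch of a TRL-style GRPO reward function. GRPOTrainer calls each
# reward function with the batch of completions (plus extra kwargs) and
# expects one float score back per completion.
def length_penalty_reward(completions, **kwargs):
    """Hypothetical reward: prefer shorter completions, floored at 0."""
    return [max(0.0, 1.0 - len(c) / 1024) for c in completions]

scores = length_penalty_reward(["short answer", "x" * 2048])
print(scores)  # [0.98828125, 0.0]
```

A callable like this would be passed as `reward_funcs=[length_penalty_reward]` when constructing `GRPOTrainer`; the actual rewards used for this checkpoint are not recorded in the card.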
adapter_config.json CHANGED

```diff
@@ -25,13 +25,13 @@
   "rank_pattern": {},
   "revision": null,
   "target_modules": [
+    "q_proj",
+    "gate_proj",
     "down_proj",
-    "up_proj",
-    "o_proj",
-    "v_proj",
     "k_proj",
-    "q_proj",
-    "gate_proj"
+    "o_proj",
+    "up_proj",
+    "v_proj"
   ],
   "target_parameters": null,
   "task_type": "CAUSAL_LM",
```
adapter_model.safetensors CHANGED

```diff
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:527d0c067274c84ccf2c65b6b7639e7d3f4a6595a6b57e47519521dfd3b6a1ff
+oid sha256:6c7c39d277354d47f1a92038c07c003f26416ccff295aeb12f614d1876f2f0a8
 size 479005064
```
training_args.bin ADDED

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:988c8002f08927e1ff0bcd03c404e65e0ea558a393fe5b18457b2b40d64758dc
+size 7249
```