Thrillcrazyer committed (verified)
Commit 2452585 · Parent(s): 76d15d5

Training in progress, step 50

README.md CHANGED
@@ -1,5 +1,5 @@
  ---
- datasets: DeepMath-103k
+ base_model: Thrillcrazyer/Qwen-7B_THIP
  library_name: transformers
  model_name: Qwen-7B_THIP
  tags:
@@ -9,30 +9,60 @@ tags:
  licence: license
  ---
 
- <h1 align= "center"> Reasoning-Aware GRPO using Process Mining </h1>
-
- <p align="center">
- <a href="https://pnubaelab.github.io/"><b>BAELAB</b></a>, Pusan National University, Busan, Korea
- </p>
- <p align="center">
- Taekyhun Park<sup>*</sup> , Yongjae Lee<sup>*</sup>, Hyerim Bae<sup>&dagger;</sup>
- </p>
-
-
-
- <p align="center">
- <a href="https://github.com/Thrillcrazyer/THIP"><b>🌟 Github</b></a> |
- <a href="https://huggingface.co/Thrillcrazyer/Qwen-1.5B_THIP"><b>📥 1.5B Download</b></a> |
- <a href="https://huggingface.co/Thrillcrazyer/Qwen-1.5B_THIP"><b>📥 7B Download</b></a> |
- <a href="https://arxiv.org/abs/2510.25065"><b>📄 Arxiv Paper Link</b></a> |
- </p>
-
- # Abstract
-
- Reinforcement learning (RL)-based post-training has been crucial for enabling multi-step reasoning in large reasoning models (LRMs), yet current reward schemes are typically outcome-centric. We propose **PM4GRPO**, a reasoning-aware Group Relative Policy Optimization (GRPO) that augments standard answer/format rewards with signals over the reasoning procedure. To this end, process mining techniques are utilized to compute a scalar conformance reward that measures how closely a policy model's reasoning aligns with the pretrained teacher model. The empirical results on five benchmarks demonstrate that **PM4GRPO** significantly outperforms existing methodologies for GRPO-based post-training. These results highlight that leveraging process mining for reasoning-aware GRPO effectively enhances the reasoning capabilities of policy models.
-
- # Illustration of PM4GRPO
-
- <div align="center">
- <img src="https://arxiv.org/html/2510.25065v1/x1.png" width="600"/>
- </div>
+ # Model Card for Qwen-7B_THIP
+
+ This model is a fine-tuned version of [Thrillcrazyer/Qwen-7B_THIP](https://huggingface.co/Thrillcrazyer/Qwen-7B_THIP).
+ It has been trained using [TRL](https://github.com/huggingface/trl).
+
+ ## Quick start
+
+ ```python
+ from transformers import pipeline
+
+ question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
+ generator = pipeline("text-generation", model="Thrillcrazyer/Qwen-7B_THIP", device="cuda")
+ output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
+ print(output["generated_text"])
+ ```
+
+ ## Training procedure
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/pthpark1/TAC/runs/whtsmb9d)
+
+
+ This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
+
+ ### Framework versions
+
+ - TRL: 0.27.0
+ - Transformers: 4.57.6
+ - Pytorch: 2.8.0
+ - Datasets: 4.5.0
+ - Tokenizers: 0.22.2
+
+ ## Citations
+
+ Cite GRPO as:
+
+ ```bibtex
+ @article{shao2024deepseekmath,
+     title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
+     author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
+     year = 2024,
+     eprint = {arXiv:2402.03300},
+ }
+
+ ```
+
+ Cite TRL as:
+
+ ```bibtex
+ @misc{vonwerra2022trl,
+     title = {{TRL: Transformer Reinforcement Learning}},
+     author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
+     year = 2020,
+     journal = {GitHub repository},
+     publisher = {GitHub},
+     howpublished = {\url{https://github.com/huggingface/trl}}
+ }
+ ```
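The updated model card states the model was trained with GRPO from the DeepSeekMath paper. The core of that method is a group-relative advantage: sample several completions per prompt, then normalize their rewards against the group mean and standard deviation. A minimal illustrative sketch (not the repository's or TRL's actual implementation):

```python
# Sketch of GRPO's group-relative advantage normalization, as described in
# the DeepSeekMath paper. Function name and plain-list interface are
# illustrative choices, not code from this repository.

def group_relative_advantages(rewards):
    """Normalize one group's completion rewards to zero mean, unit std."""
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    if std == 0:  # every completion scored the same: no learning signal
        return [0.0] * n
    return [(r - mean) / std for r in rewards]
```

Completions scoring above the group mean get positive advantages and are reinforced; those below are discouraged, with no separate value network needed.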
config.json CHANGED
@@ -52,7 +52,7 @@
  "rope_theta": 10000,
  "sliding_window": null,
  "tie_word_embeddings": false,
- "transformers_version": "4.57.1",
+ "transformers_version": "4.57.6",
  "use_cache": true,
  "use_mrope": false,
  "use_sliding_window": false,
generation_config.json CHANGED
@@ -8,5 +8,5 @@
  "pad_token_id": 151643,
  "temperature": 0.6,
  "top_p": 0.95,
- "transformers_version": "4.57.1"
+ "transformers_version": "4.57.6"
  }
model-00001-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:ae3f733de4131deef1b2cb63aa102b48653bd3bf01ae4ac09870accaadd0d30d
+ oid sha256:11a4f5f4ea26f09007057d60a769c6fbef7bc0df001c92d25e4e2dee75b7d569
  size 4877660776
model-00002-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:62d644447d298c8be54be5ddc441518660ffdae0b655dd19ec793ada3d747b68
+ oid sha256:ea807fb19350c86aae87e5e5b162c25aa5b435b2f717a1e07601ee7686a19f74
  size 4932751008
model-00003-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:bf676d93247dbf99ff1d1cbca37981160af1c2ed260fdcdef3a410b3d9637623
+ oid sha256:3f0f2b7c5ab74e9f0e045b3b1c85860450de81b72725df2045e6f1fdfa9f57e6
  size 4330865200
model-00004-of-00004.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:16dbca31381da9cb6c7bc5b23da6f9d6d9a3b5ecf975cdaa1432d71552c17437
+ oid sha256:bc85e035063a45c3a98fa44d1b9d4f12ded755bb03c37b702ef13e9fc0cb0e83
  size 1089994880
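The safetensors diffs above are Git LFS pointer files (`version` / `oid sha256:<hex>` / `size <bytes>`), so only the hashes change in the repo while the weights live in LFS storage. A hedged sketch of checking a downloaded shard against its pointer — the helper names are illustrative, not part of any git-lfs tooling:

```python
# Verify downloaded bytes against a git-lfs v1 pointer file.
# parse_lfs_pointer / verify_blob are hypothetical helpers for illustration.
import hashlib

def parse_lfs_pointer(text):
    """Extract the sha256 hex digest and byte size from LFS pointer text."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    return fields["oid"].removeprefix("sha256:"), int(fields["size"])

def verify_blob(pointer_text, blob_bytes):
    """True iff blob_bytes matches the pointer's declared size and oid."""
    oid, size = parse_lfs_pointer(pointer_text)
    return len(blob_bytes) == size and hashlib.sha256(blob_bytes).hexdigest() == oid
```

Running this over each `model-0000N-of-00004.safetensors` after download would confirm the shards match the new oids committed here.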
tokenizer_config.json CHANGED
@@ -185,11 +185,17 @@
  "eos_token": "<|end▁of▁sentence|>",
  "extra_special_tokens": {},
  "legacy": true,
+ "max_length": 512,
  "model_max_length": 16384,
+ "pad_to_multiple_of": null,
  "pad_token": "<|end▁of▁sentence|>",
+ "pad_token_type_id": 0,
+ "padding_side": "left",
  "sp_model_kwargs": {},
+ "stride": 0,
  "tokenizer_class": "LlamaTokenizerFast",
  "truncation_side": "left",
+ "truncation_strategy": "longest_first",
  "unk_token": null,
  "use_default_system_prompt": false
  }
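Among the new tokenizer settings, `"padding_side": "left"` is the one with behavioral weight: for a decoder-only model, batches are padded at the front so the real tokens sit at the sequence end where generation continues. A toy sketch of what left-padding means (illustrative only, not the tokenizer's actual code):

```python
# Mimic padding_side="left": pad shorter sequences at the FRONT so every
# row ends with its real tokens. pad_batch_left is a hypothetical helper.

def pad_batch_left(batch, pad_id):
    """Left-pad lists of token ids to the longest sequence in the batch."""
    width = max(len(seq) for seq in batch)
    return [[pad_id] * (width - len(seq)) + list(seq) for seq in batch]
```

With right padding, pad tokens would trail the prompt and the model would be asked to continue from padding; left padding avoids that, which is why it is the usual choice for batched generation.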
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:e604235bce63f4814885835e394c83ec87b2108cd8832606c78a234cf03a59e8
- size 8593
+ oid sha256:b5b7839c52316e8f9028964b0dce717dc11ebd34c43ebffd1f90fc40998f0bfd
+ size 9169