---
license: apache-2.0
language:
- en
metrics:
- accuracy
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---

# 🚀 GRPO-LEAD: Efficient Reasoning Enhancement for Mathematical Tasks

---

## 📚 Overview

**GRPO-LEAD** (**GRPO** with **L**ength-dependent rewards, **E**xplicit penalties, and **A**dvantage reweighting for **D**ifficulty) is an advanced reinforcement learning pipeline designed to fine-tune large language models (LLMs) for concise, accurate, and efficient reasoning on mathematical tasks.
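
As a rough illustration of these three ingredients, the sketch below shows how a length-dependent bonus, an explicit penalty for wrong answers, and difficulty-based advantage reweighting could sit on top of GRPO's group-normalized advantage. The functional forms and hyperparameters (`alpha`, `penalty`, `beta`) are illustrative assumptions, not the paper's exact equations; see the paper and repository for the precise formulation.

```python
def lead_advantage(reward, group_rewards, length, correct_lengths,
                   difficulty, is_correct,
                   alpha=0.05, penalty=0.1, beta=1.0):
    """Illustrative sketch of GRPO-LEAD-style shaping on top of GRPO's
    group-normalized advantage; forms and defaults are assumptions."""
    if is_correct and correct_lengths:
        # Length-dependent reward: bonus for finishing shorter than the
        # average correct solution sampled for the same question.
        mean_len = sum(correct_lengths) / len(correct_lengths)
        reward += alpha * (mean_len - length) / max(mean_len, 1.0)
    elif not is_correct:
        # Explicit penalty: wrong answers cost more than a flat zero.
        reward -= penalty
    # GRPO-style group normalization over the question's sampled rewards.
    mean_r = sum(group_rewards) / len(group_rewards)
    std_r = (sum((r - mean_r) ** 2 for r in group_rewards)
             / len(group_rewards)) ** 0.5
    advantage = (reward - mean_r) / (std_r + 1e-6)
    # Difficulty-based reweighting: harder questions get larger updates.
    return advantage * (1.0 + beta * difficulty)

# Example: a correct, shorter-than-average answer to a hard question.
print(lead_advantage(reward=1.0, group_rewards=[1.0, 0.0, 1.0, 0.0],
                     length=800, correct_lengths=[800, 1200],
                     difficulty=2.0, is_correct=True))
```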

---

## 📊 Performance Benchmarks

The following benchmarks were conducted on the AIME24 and AIME25 datasets with a 14k maximum token budget, temperature of 0.6, min-p of 0.01, and 32 samples per question.

| **Model** | **AIME24 Cons@32** | **AIME24 Pass@1** | **AIME24 Avg. Length** | **AIME25 Cons@32** | **AIME25 Pass@1** | **AIME25 Avg. Length** |
|---------------------|--------------------|-------------------|------------------------|--------------------|-------------------|------------------------|
| **DeepSeek-Distilled-14B** | 0.800 | 0.614 | 9182 | 0.633 | 0.429 | 10046 |
| **Light-R1-14B-DS** | 0.833 | 0.641 | 9571 | 0.767 | 0.505 | 10194 |
| **LEAD-14B (ours)** | **0.867** | **0.650** | **8267** | **0.767** | **0.539** | **8668** |

Our GRPO-LEAD model achieves superior consistency and higher accuracy, and its shorter average reasoning lengths demonstrate significantly improved reasoning efficiency.
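
Here, Pass@1 is the mean per-sample accuracy over the 32 completions, while Cons@32 scores the majority-vote answer across them. The sketch below shows how these metrics are typically computed; extracting the final \boxed{} answer from each completion is assumed to happen upstream.

```python
from collections import Counter

def pass_at_1(correct_flags):
    """Pass@1 under multi-sample evaluation: mean accuracy
    over all sampled completions for one question."""
    return sum(correct_flags) / len(correct_flags)

def cons_at_k(answers, reference):
    """Cons@k: take the most common final answer among the k
    samples and check it against the reference answer."""
    majority_answer, _ = Counter(answers).most_common(1)[0]
    return float(majority_answer == reference)

# Hypothetical results for one question with 32 samples:
answers = ["42"] * 20 + ["41"] * 12        # extracted \boxed{} answers
correct = [a == "42" for a in answers]     # per-sample correctness
print(pass_at_1(correct))                  # 0.625
print(cons_at_k(answers, "42"))            # 1.0 (majority vote is correct)
```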

---

## ⚙️ Usage

For the best performance on mathematical problems, simply use the following prompt format:
```python
[
    {
        "role": "user",
        "content": question + "\nLet's think step by step and output the final answer within \\boxed{}."
    }
]
```
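
For an end-to-end example, below is a minimal inference sketch using the Hugging Face `transformers` chat-template API. The model ID is a placeholder for this repository's checkpoint, and the sampling settings mirror the benchmark configuration above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PlanePaper/GRPO-LEAD-14B"  # placeholder; use this repo's actual ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

question = "What is the sum of all positive divisors of 28?"
messages = [
    {
        "role": "user",
        "content": question + "\nLet's think step by step and output the final answer within \\boxed{}.",
    }
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
    inputs, max_new_tokens=14336, do_sample=True, temperature=0.6, min_p=0.01
)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```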

---

## 📂 Code and Documentation

For complete details, the codebase, and usage examples, please visit our GitHub repository:

[**📌 GitHub Repository**](https://github.com/aeroplanepaper/GRPO-LEAD)

---

## 📦 Dataset: GRPO-LEAD-SFTData

We release [**GRPO-LEAD-SFTData**](https://huggingface.co/datasets/PlanePaper/GRPO-LEAD-SFTData), a curated collection of **12,153** high-quality mathematical reasoning samples for supervised fine-tuning, generated with [**QwQ-32B**](https://huggingface.co/Qwen/QwQ-32B).
Derived primarily from the **DeepScaler** dataset ([DeepScaler](https://github.com/agentica-project/rllm)), it retains only examples with **difficulty > 1**, targeting challenging problem-solving scenarios. All entries are structured for seamless integration with [**LLaMA Factory**](https://github.com/hiyouga/LLaMA-Factory) and follow a standardized SFT-ready format.

Used as the training data for GRPO-LEAD's supervised fine-tuning stage, this dataset strengthens the model's base capability for solving mathematical problems.
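
A minimal loading sketch with the Hugging Face `datasets` library; the `train` split name is an assumption, so check the dataset card for the actual splits and schema.

```python
from datasets import load_dataset

# Placeholder split name; see the dataset card for the actual splits.
ds = load_dataset("PlanePaper/GRPO-LEAD-SFTData", split="train")
print(len(ds))  # expected: 12153 SFT samples
print(ds[0])    # one QwQ-32B-generated, SFT-ready record
```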

---

## 📖 Citation

If you find our work useful, please cite it as:

```bibtex
@misc{zhang2025grpoleaddifficultyawarereinforcementlearning,
      title={GRPO-LEAD: A Difficulty-Aware Reinforcement Learning Approach for Concise Mathematical Reasoning in Language Models},
      author={Jixiao Zhang and Chunsheng Zuo},
      year={2025},
      eprint={2504.09696},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2504.09696},
}
```

Enjoy exploring GRPO-LEAD! 🚀✨