  - **[2025/05/27]** 🎉 We release [**ConciseR-Zero-7B**](https://huggingface.co/Nickyang/ConciseR-Zero-7B) and [**ConciseR-Zero-7B-Preview**](https://huggingface.co/Nickyang/ConciseR-Zero-7B-Preview).
## ✨Key Results

We report Pass@1 accuracy averaged over 32 samples for each problem.

| Model | AIME 2024 | MATH-500 | AMC 2023 | Minerva | Olympiad | Avg. Score |
|-------|-----------|----------|----------|---------|----------|------------|
| Qwen2.5-1.5B-Base | 0.0 | 3.3 | 2.5 | 1.8 | 1.5 | 1.8 |
| Qwen2.5-1.5B-Instruct | 1.3 | 57.5 | 26.2 | 19.4 | 20.3 | 24.9 |
| Qwen2.5-Math-1.5B-Base | 11.3 | 51.7 | 44.0 | 11.3 | 26.0 | 28.9 |
| Qwen2.5-Math-1.5B-Instruct | 12.0 | 74.7 | 26.7 | 35.0 | 37.9 | 37.3 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 56.9 |
| FastCuRL-1.5B-Preview | 43.1 | 88.0 | 74.2 | 31.6 | 50.4 | 57.5 |
| FastCuRL-1.5B-V3 | 49.6 | 90.5 | 78.5 | 34.7 | 54.5 | 61.6 |
| | | | | | | |
| Qwen2.5-7B-Base | 3.3 | 64.6 | 30.0 | 25.7 | 29.0 | 30.5 |
| Qwen2.5-7B-Instruct | 12.3 | 77.1 | 52.8 | 34.9 | 38.7 | 43.2 |
| Qwen2.5-Math-7B-Base | 20.7 | 64.3 | 56.2 | 17.3 | 29.0 | 37.5 |
| Qwen2.5-Math-7B-Instruct | 15.7 | 82.9 | 67.0 | 35.0 | 41.3 | 48.4 |
| Eurus-2-7B-PRIME | 17.8 | 80.1 | 63.0 | 37.5 | 43.9 | 48.5 |
| Open-Reasoner-Zero-7B | 19.7 | 83.9 | 59.5 | 31.6 | 47.6 | 48.5 |
| SimpleRL-Zero-7B | 14.0 | 77.9 | 58.0 | 33.0 | 39.0 | 44.4 |
| SimpleRL-Zero-Math-7B | 22.7 | 76.9 | 62.2 | 30.1 | 39.3 | 46.2 |
| Oat-Zero-7B | 28.0 | 79.4 | 66.2 | 34.4 | 43.8 | 50.4 |
| ConciseR-Zero-7B-Preview (Stage-1) | 42.8 | 83.0 | 73.9 | 31.8 | 45.1 | 55.3 |
| ConciseR-Zero-7B (Stage-2) | 43.3 | 83.0 | 76.7 | 31.5 | 46.0 | 56.1 |
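The averaging above can be sketched as follows. This is a minimal illustration of the metric, not the released evaluation code; the correctness flags are hypothetical inputs:

```python
def pass_at_1(correct_flags):
    """Pass@1 for one problem: the fraction of sampled completions
    whose final answer is correct (the table uses 32 samples)."""
    return sum(correct_flags) / len(correct_flags)


def benchmark_score(per_problem_flags):
    """Benchmark score: mean Pass@1 across problems, in percent."""
    scores = [pass_at_1(flags) for flags in per_problem_flags]
    return 100.0 * sum(scores) / len(scores)


# Hypothetical example: two problems, 4 samples each.
flags = [[True, True, False, True], [False, True, False, False]]
print(benchmark_score(flags))  # 50.0
```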
## Usage

```python
import vllm


def apply_template(question: str):
    # Wrap the raw question in the prompt template used during training.
    return ("""<|startoftext|>A conversation between User and Assistant. The User asks a question, and the Assistant solves it. \
The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. \
The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, \
i.e., <think> reasoning process here </think> <answer> answer here </answer>. \
Please reason step by step, and put your final answer within \\boxed{}.

User:
{query}

Assistant:
""".replace("{query}", question))


model_name = "Nickyang/ConciseR-Zero-7B"

sampling_params = vllm.SamplingParams(
    n=32,
    temperature=0.6,
    top_p=1.0,
    max_tokens=3072,
)

model = vllm.LLM(
    model_name,
    max_model_len=4096,
    dtype="bfloat16",
    enable_prefix_caching=True,
)

prompts = [
    "How many positive whole-number divisors does 196 have?"
]
prompts = list(map(apply_template, prompts))
outputs = model.generate(prompts, sampling_params)

# Print the generated text of each sampled completion.
for output in outputs:
    for completion in output.outputs:
        print(completion.text)
```
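Each completion is prompted to end with its final answer in `\boxed{…}`. A minimal sketch for pulling that answer out of the generated text — the helper name and regex are ours, not part of the model release:

```python
import re


def extract_boxed(text: str):
    """Return the contents of the last \\boxed{...} in `text`, or None."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None


# Hypothetical completion tail for the divisors question above.
tail = r"... so the count is \boxed{9}. </answer>"
print(extract_boxed(tail))  # 9
```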
70
+
71
+ ## Citation
72
+
73
+ ```latex
74
+ @misc{song2025conciser,
75
+ title={Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning},
76
+ author={Mingyang Song and Mao Zheng},
77
+ year={2025},
78
+ eprint={2505.21178},
79
+ archivePrefix={arXiv},
80
+ primaryClass={cs.CL},
81
+ url={https://arxiv.org/abs/2505.21178},
82
+ }
83
+ ```