Update README.md
- **[2025/05/27]** 🎉 We release [**ConciseR-Zero-7B**](https://huggingface.co/Nickyang/ConciseR-Zero-7B) and [**ConciseR-Zero-7B-Preview**](https://huggingface.co/Nickyang/ConciseR-Zero-7B-Preview).
## Usage
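A minimal inference example with [vLLM](https://github.com/vllm-project/vllm): each question is wrapped in the prompt template the model expects, then sampled in a batch. Since every prompt shares the same instruction prefix, `enable_prefix_caching=True` lets vLLM reuse the cached prefix KV states across requests.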
```python
import vllm


# Wrap a raw question in the model's reasoning prompt template.
# (.replace is used instead of str.format because the template itself
# contains literal braces in \boxed{}.)
def apply_template(question: str) -> str:
    return ("""<|startoftext|>A conversation between User and Assistant. The User asks a question, and the Assistant solves it. \
The Assistant first thinks about the reasoning process in the mind and then provides the User with the answer. \
The reasoning process is enclosed within <think> </think> and answer is enclosed within <answer> </answer> tags, respectively, \
i.e., <think> reasoning process here </think> <answer> answer here </answer>. \
Please reason step by step, and put your final answer within \\boxed{}.

User:
{query}

Assistant:
""".replace("{query}", question))


model_name = "Nickyang/ConciseR-Zero-7B"

# Draw 32 samples per prompt, up to 3072 new tokens each.
sampling_params = vllm.SamplingParams(
    n=32,
    temperature=0.6,
    top_p=1.0,
    max_tokens=3072,
)

model = vllm.LLM(
    model_name,
    max_model_len=4096,
    dtype="bfloat16",
    enable_prefix_caching=True,
)

prompts = [
    "How many positive whole-number divisors does 196 have?",
]
prompts = list(map(apply_template, prompts))
outputs = model.generate(prompts, sampling_params)

print(outputs)
```
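Each entry of `outputs` is a vLLM `RequestOutput` carrying `n=32` completions, so `print(outputs)` dumps raw objects. As a small post-processing sketch (the `extract_boxed` helper below is hypothetical, not part of the release, and assumes the boxed answer contains no nested braces), you could pull the final `\boxed{...}` span from each completion and majority-vote over it. For reference, 196 = 2² × 7², so the expected answer is (2+1)(2+1) = 9.

```python
import re


# Hypothetical helper: return the contents of the last \boxed{...} span
# in a completion, or None if none was produced. Nested braces are not
# handled, which is fine for short numeric answers.
def extract_boxed(text: str) -> str | None:
    matches = re.findall(r"\\boxed\{([^{}]*)\}", text)
    return matches[-1] if matches else None


# outputs comes from the snippet above; each RequestOutput holds 32 samples.
for request_output in outputs:
    for completion in request_output.outputs:
        print(extract_boxed(completion.text))
```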
## Citation
```latex
@misc{song2025conciser,
      title={Walk Before You Run! Concise LLM Reasoning via Reinforcement Learning},
      author={Mingyang Song and Mao Zheng},
      year={2025},
      eprint={2505.21178},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.21178},
}
```