Add model metadata and improve documentation

#1
by nielsr - opened
Files changed (1)
  1. README.md +42 -47
README.md CHANGED
@@ -1,11 +1,21 @@
1
  ---
2
  license: mit
3
  ---
 
4
  # PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning
5
 
6
  <div align="center">
7
 
8
- [**Read the Paper**](https://github.com/stepfun-ai/PaCoRe/blob/main/pacore_report.pdf) | [**Download Models**](https://huggingface.co/stepfun-ai/PaCoRe-8B) | [**Training Data**](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k)
9
 
10
  </div>
11
 
@@ -17,22 +27,22 @@ We introduce **PaCoRe (Parallel Coordinated Reasoning)**, a framework that shift
17
 
18
  Trained via large-scale, outcome-based reinforcement learning, PaCoRe masters the **Reasoning Synthesis** capabilities required to reconcile diverse parallel insights.
19
 
20
- The approach yields strong improvements across diverse domains, and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5\% on HMMT 2025, surpassing GPT-5’s 93.2\% by scaling effective TTC to roughly two million tokens.
21
 
22
  We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work!
23
 
24
  ------
25
 
26
  <p align="center">
27
- <img src="figure/teaser_draft_02.png" width="48%" />
28
- <img src="figure/before_after_train_lcb_02.png" width="48%" />
29
  </p>
30
 
31
- *Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance. Left: On HMMT 2025, PaCoRe-8B demonstrates remarkable test-time scaling, yielding steady gains and ultimately surpassing GPT-5. Right: On LiveCodeBench, the RLVR-8B model fails to leverage increased test-time compute, while the PaCoRe-8B model effectively unlocks substantial gains as test-time compute increases.*
32
 
33
  <p align="center">
34
- <img src="figure/train_reward_response_length_1130.png" width="48%" />
35
- <img src="figure/benchmark_accuracy_1130.png" width="48%" />
36
  </p>
37
 
38
  *Figure 2 | PaCoRe training dynamics. Left panels: the training reward and response length steadily increase, demonstrating training stability and effectiveness. Right panels: evaluation on HMMT 2025 and LiveCodeBench (2408-2505). Performance is reported using single-round coordinated reasoning in the PaCoRe inference setting with $\vec{K} = [16]$.*
@@ -41,7 +51,7 @@ We open-source model checkpoints, training data, and the full inference pipeline
41
 
42
  **[2025/12/09]** We are excited to release the **PaCoRe-8B** ecosystem:
43
 
44
- * 📝 **In-depth Technical Report:** [**PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning.**](https://github.com/stepfun-ai/PaCoRe/blob/main/pacore_report.pdf)
45
  * 🤖 **Model:**
46
  * [PaCoRe-8B](https://huggingface.co/stepfun-ai/PaCoRe-8B): Our final PaCoRe-trained model checkpoint!
47
  * [RLVR-8B-0926](https://huggingface.co/stepfun-ai/RLVR-8B-0926): The initial checkpoint of our study, produced by strong reasoning-oriented post-training on [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
@@ -56,7 +66,7 @@ We open-source model checkpoints, training data, and the full inference pipeline
56
  <tr>
57
  <th class="tg-header"></th>
58
  <th class="tg-data">HMMT 2025</th>
59
- <th class="tg-data">LiveCodeBench</th>
60
  <th class="tg-data">HLE<sub>text</sub></th>
61
  <th class="tg-data">MultiChallenge</th>
62
  </tr>
@@ -130,8 +140,7 @@ We open-source model checkpoints, training data, and the full inference pipeline
130
  </tbody>
131
  </table>
132
 
133
- *Table 1 | For each benchmark, we report accuracy together with total TTC (in thousands). For *Low*, *Medium*, and *High*, we apply the inference trajectory configuration as $\vec{K}=[4]$, $[16]$, and $[32, 4]$ separately.*
134
-
135
 
136
  ### Key Findings
137
  * **Message Passing Unlocks Scaling.** Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier and lets reasoning scale freely.
@@ -139,52 +148,38 @@ We open-source model checkpoints, training data, and the full inference pipeline
139
  * **Data as a Force Multiplier.** The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.
140
 
141
  ## Getting Started 🚀
142
- ### Data
143
- The data is provided as a `list[dict]`, where each entry represents a training instance:
144
- * `conversation`: The original problem/prompt messages.
145
- * `responses`: A list of cached generated responses (trajectories). These serve as the **input messages ($M$)** used during PaCoRe training.
146
- * `ground_truth`: The verifiable answer used for correctness evaluation.
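For illustration, a hypothetical entry following this layout might look as follows; the values are placeholders invented for this sketch, not real dataset content, and the `is_valid_instance` helper is likewise our own illustration:

```python
# Hypothetical PaCoRe training instance following the field layout described
# above; values are illustrative placeholders, not real dataset entries.
instance = {
    "conversation": [{"role": "user", "content": "Compute 2 + 2."}],
    "responses": [  # cached trajectories: the input messages M used in training
        "Adding 2 and 2 gives 4.",
        "2 + 2 = 4.",
    ],
    "ground_truth": "4",
}

def is_valid_instance(item: dict) -> bool:
    """Check that an entry carries the three expected fields."""
    return (
        isinstance(item.get("conversation"), list)
        and isinstance(item.get("responses"), list)
        and "ground_truth" in item
    )

print(is_valid_instance(instance))  # True
```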
147
 
148
  ### Model Serving
149
- You can directly use `vllm serve` to serve the model! More inference details are covered in the Inference Pipeline section below.
150
-
151
- ### Inference Pipeline
152
- ![](/figure/inference_pipeline_teaser_02.png)
153
-
154
- *Figure 3 | Inference pipeline of PaCoRe. Each round launches broad parallel exploration, compacts the resulting trajectories into compacted messages, and feeds these messages together with the question forward to coordinate the next round. Repeating this process $\hat{R}$ times yields multi-million-token effective TTC while respecting fixed context limits, with the final compacted message serving as the system’s answer.*
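The round-by-round loop described in the caption can be sketched in Python. Since the official inference code is not yet released, `generate` and `compact` below are hypothetical stand-ins for the model calls, and the whole sketch only illustrates the control flow:

```python
# Hypothetical sketch of the PaCoRe coordination loop from Figure 3.
# `generate` and `compact` stand in for model calls; names and signatures
# are illustrative only, not the released pipeline.
from typing import Callable, List

def pacore_infer(
    question: str,
    k_schedule: List[int],                 # e.g. [32, 4] for the "High" setting
    generate: Callable[[str, str], str],   # (question, message) -> trajectory
    compact: Callable[[str, List[str]], str],
) -> str:
    message = ""                           # compacted message carried across rounds
    for k in k_schedule:                   # one entry per coordination round
        # Broad parallel exploration: k independent trajectories this round.
        trajectories = [generate(question, message) for _ in range(k)]
        # Compaction: reconcile trajectories into a bounded-size message.
        message = compact(question, trajectories)
    return message                         # final compacted message is the answer

# Toy stand-ins to exercise the control flow.
answer = pacore_infer(
    "2+2?",
    [4, 2],
    generate=lambda q, m: "4",
    compact=lambda q, ts: max(set(ts), key=ts.count),  # majority-vote stand-in
)
print(answer)
```

Note how the compacted message, not the raw trajectories, is what crosses round boundaries, which is what keeps each round within a fixed context limit.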
155
-
156
- Inference code coming soon!
157
 
 
 
 
 
 
158
 
159
  ## 🙏 Acknowledgements
160
  - This work was supported by computing resources and infrastructure provided by [StepFun](https://www.stepfun.com/) and Tsinghua University.
161
- - We are deeply grateful to our colleagues for their support:
162
- * Inference: Song Yuan, Wuxun Xie, Mingliang Li, Bojun Wang.
163
- * Training: Xing Chen, Yuanwei Lu, Changyi Wan, Yu Zhou.
164
- * Infra Operations: Shaoliang Pang, Changxin Miao, Xu Zhao, Wei Zhang, Zidong Yang, Junzhe Lin, Yuxiang Yang, Chen Xu, Xin Li, Bin Wang.
165
- * Data Management: Xiaoxiao Ren, Zhiguo Huang, and Kang An.
166
- * Helpful Discussions: Liang Zhao, Jianjian Sun, Zejia Weng, JingJing Xie.
167
- - We are grateful to colleagues from StepFun and Tsinghua University for their valuable feedback and contributions.
168
- - Our work is built on amazing open source models and data; thanks again!
169
-
170
- ## 🔮 Future Work
171
- We are just scratching the surface of parallel coordinated reasoning. Our roadmap includes:
172
- - **Scaling the Extremes**: We plan to apply PaCoRe to stronger foundation models, expand the task domains, and further scale up both the breadth (parallel trajectories) and depth (coordination rounds) to tackle challenges currently deemed unsolvable.
173
- - **Boosting Token Intelligence Density**: While we currently scale by volume, we aim to maximize the utility of every unit of compute spent. This involves enabling more efficient parallel exploration through better organization, cooperation, and division of labor among trajectories.
174
- - **Emergent Multi-Agent Intelligence**: We are interested in jointly training the synthesis policy and the message-passing mechanism, laying out a minimal yet rich cooperative multi-agent learning environment that offers a valuable playground for studying emergent communication, self-organization, and collective intelligence.
175
- - **Ouroboros for Pre- and Post-Training**: We intend to investigate advanced synthetic data generation techniques built on the PaCoRe pipeline to improve both current pretraining and post-training processes.
176
-
177
- ## Advertisement Time 📣
178
- We are currently seeking self-motivated engineers and researchers.
179
- If you are interested in our project and would like to contribute to scaling up reasoners all the way to AGI, please feel free to reach out to us at hanqer@stepfun.com.
180
 
181
  ## 📜 Citation
182
 
183
  ```bibtex
184
  @misc{pacore2025,
185
- title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning},
186
- author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
187
- year={2025},
188
- url={https://github.com/stepfun-ai/PaCoRe/blob/main/pacore_report.pdf},
189
  }
190
  ```
 
1
  ---
2
  license: mit
3
+ library_name: transformers
4
+ pipeline_tag: text-generation
5
+ base_model: Qwen/Qwen3-8B-Base
6
+ tags:
7
+ - reasoning
8
+ - test-time-compute
9
+ - pacore
10
+ - math
11
+ - code
12
  ---
13
+
14
  # PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning
15
 
16
  <div align="center">
17
 
18
+ [**Read the Paper**](https://arxiv.org/abs/2601.05593) | [**Download Models**](https://huggingface.co/stepfun-ai/PaCoRe-8B) | [**Training Data**](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k) | [**GitHub**](https://github.com/stepfun-ai/PaCoRe)
19
 
20
  </div>
21
 
 
27
 
28
  Trained via large-scale, outcome-based reinforcement learning, PaCoRe masters the **Reasoning Synthesis** capabilities required to reconcile diverse parallel insights.
29
 
30
+ The approach yields strong improvements across diverse domains, and notably pushes reasoning beyond frontier systems in mathematics: an 8B model reaches 94.5% on HMMT 2025, surpassing GPT-5’s 93.2% by scaling effective TTC to roughly two million tokens.
31
 
32
  We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work!
33
 
34
  ------
35
 
36
  <p align="center">
37
+ <img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/teaser_draft_02.png" width="48%" />
38
+ <img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/before_after_train_lcb_02.png" width="48%" />
39
  </p>
40
 
41
+ *Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance. Left: On HMMT 2025, PaCoRe-8B demonstrates remarkable test-time scaling, yielding steady gains and ultimately surpassing GPT-5. Right: On LiveCodeBench, the RLVR-8B model fails to leverage increased test-time compute, while the PaCoRe-8B model effectively unlocks substantial gains as test-time compute increases.*
42
 
43
  <p align="center">
44
+ <img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/train_reward_response_length_1130.png" width="48%" />
45
+ <img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/benchmark_accuracy_1130.png" width="48%" />
46
  </p>
47
 
48
  *Figure 2 | PaCoRe training dynamics. Left panels: the training reward and response length steadily increase, demonstrating training stability and effectiveness. Right panels: evaluation on HMMT 2025 and LiveCodeBench (2408-2505). Performance is reported using single-round coordinated reasoning in the PaCoRe inference setting with $\vec{K} = [16]$.*
 
51
 
52
  **[2025/12/09]** We are excited to release the **PaCoRe-8B** ecosystem:
53
 
54
+ * 📝 **In-depth Technical Report:** [**PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning.**](https://arxiv.org/abs/2601.05593)
55
  * 🤖 **Model:**
56
  * [PaCoRe-8B](https://huggingface.co/stepfun-ai/PaCoRe-8B): Our final PaCoRe-trained model checkpoint!
57
  * [RLVR-8B-0926](https://huggingface.co/stepfun-ai/RLVR-8B-0926): The initial checkpoint of our study, produced by strong reasoning-oriented post-training on [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
 
66
  <tr>
67
  <th class="tg-header"></th>
68
  <th class="tg-data">HMMT 2025</th>
69
+ <th class="tg-data">LiveCodeBench (2408-2505)</th>
70
  <th class="tg-data">HLE<sub>text</sub></th>
71
  <th class="tg-data">MultiChallenge</th>
72
  </tr>
 
140
  </tbody>
141
  </table>
142
 
143
+ *Table 1 | For each benchmark, we report accuracy together with total TTC (in thousands). For Low, Medium, and High, we apply the inference trajectory configurations $\vec{K}=[4]$, $[16]$, and $[32, 4]$, respectively.*
 
144
 
145
  ### Key Findings
146
  * **Message Passing Unlocks Scaling.** Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier and lets reasoning scale freely.
 
148
  * **Data as a Force Multiplier.** The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.
149
 
150
  ## Getting Started 🚀
151
+ ### Installation
152
+ First, clone the [official repository](https://github.com/stepfun-ai/PaCoRe) and install the package from its root:
153
+ ```bash
154
+ pip install -e .
155
+ ```
156
 
157
  ### Model Serving
158
+ You can directly use `vllm serve` to serve the model:
159
+ ```bash
160
+ vllm serve stepfun-ai/PaCoRe-8B
161
+ ```
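Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal client sketch, assuming the default endpoint and port of `vllm serve` (adjust `base_url` if you pass `--port` or similar flags):

```python
# Minimal client for the vLLM server started above, using only the standard
# library. Endpoint and request fields follow the OpenAI chat-completions
# schema that vLLM serves by default; the prompt is an arbitrary example.
import json
from urllib import request

payload = {
    "model": "stepfun-ai/PaCoRe-8B",
    "messages": [{"role": "user", "content": "What is 17 * 24?"}],
    "max_tokens": 4096,
}

def chat(base_url: str = "http://localhost:8000/v1") -> str:
    """Send one chat-completion request and return the generated text."""
    req = request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```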
 
 
 
 
162
 
163
+ ### Inference Example
164
+ Next, you can run our example inference code with the PaCoRe-low inference setting:
165
+ ```bash
166
+ python playground/example_batch_inference_pacore_low_1210.py
167
+ ```
168
 
169
  ## 🙏 Acknowledgements
170
  - This work was supported by computing resources and infrastructure provided by [StepFun](https://www.stepfun.com/) and Tsinghua University.
171
+ - Our work is built on amazing open-source models and data; thanks again!
172
 
173
  ## 📜 Citation
174
 
175
  ```bibtex
176
  @misc{pacore2025,
177
+ title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning},
178
+ author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Kaiwen Yan and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
179
+ year={2026},
180
+ eprint={2601.05593},
181
+ archivePrefix={arXiv},
182
+ primaryClass={cs.LG},
183
+ url={https://arxiv.org/abs/2601.05593},
184
  }
185
  ```