Add metadata, paper/GitHub links and data structure
#3
by nielsr HF Staff - opened

README.md CHANGED

@@ -1,190 +1,74 @@
---
license: mit
---
# PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning

<div align="center">

-[**Read the Paper**](https://github.com/stepfun-ai/PaCoRe

</div>

## 📖 Overview

-We introduce **PaCoRe (Parallel Coordinated Reasoning)**, a framework that shifts the driver of inference from sequential depth to **coordinated parallel breadth**, breaking the model context limitation and massively scaling test-time compute.
-* **Think in Parallel:** PaCoRe launches massive parallel exploration trajectories.
-* **Coordinate in Multi-rounds:** It employs a message-passing architecture to compact these thoughts into concise messages and synthesize them to guide the next round.
-
-Trained via large-scale, outcome-based reinforcement learning, PaCoRe masters the **Reasoning Synthesis** capabilities required to reconcile diverse parallel insights.

-The
-
-We open-source model checkpoints, training data, and the full inference pipeline to accelerate follow-up work!

------

<p align="center">
-<img src="figure/teaser_draft_02.png" width="48%" />
-<img src="figure/before_after_train_lcb_02.png" width="48%" />
</p>

-*Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance

-<p align="center">
-<img src="figure/train_reward_response_length_1130.png" width="48%" />
-<img src="figure/benchmark_accuracy_1130.png" width="48%" />
-</p>

-*Figure 2 | PaCoRe training dynamics. Left panels: the training reward and response length steadily increase, demonstrating training stability and effectiveness. Right panels: evaluation on HMMT 2025 and LiveCodeBench (2408-2505). Performance is reported using single-round coordinated reasoning in the PaCoRe inference setting with $\vec{K} = [16]$.*
-
-## 🔥 Releases
-
-**[2025/12/09]** We are excited to release the **PaCoRe-8B** ecosystem:
-
-* 📝 **In-depth Technical Report:** [**PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning**](https://github.com/stepfun-ai/PaCoRe/blob/main/pacore_report.pdf)
-* 🤖 **Model:**
-  * [PaCoRe-8B](https://huggingface.co/stepfun-ai/PaCoRe-8B): Our final PaCoRe-trained model checkpoint!
-  * [RLVR-8B-0926](https://huggingface.co/stepfun-ai/RLVR-8B-0926): The initial checkpoint of our study, obtained by strong reasoning-oriented post-training on [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
-* 📚 **Data:** [PaCoRe-Train-8k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k): The high-quality training corpus, including `opensource_math`, `public_mathcontest`, `synthetic_math`, and `code`:
-  * 🤗 Stage1-3k: [PaCoRe-Train-Stage1-3k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k/stage1)
-  * 🤗 Stage2-5k: [PaCoRe-Train-Stage2-5k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k/stage2)
-
-## 🔍 Experiments
-
-<table class="tg">
-<thead>
-<tr><th class="tg-header"></th><th class="tg-data">HMMT 2025</th><th class="tg-data">LiveCodeBench</th><th class="tg-data">HLE<sub>text</sub></th><th class="tg-data">MultiChallenge</th></tr>
-</thead>
-<tbody>
-<tr><td class="tg-header">GPT-5</td><td class="tg-data">93.2 (16k)</td><td class="tg-data"><b>83.5</b> (13k)</td><td class="tg-data"><b>26.0</b> (14k)</td><td class="tg-data"><b>71.1</b> (5.0k)</td></tr>
-<tr><td class="tg-header">Qwen3-235B-Thinking</td><td class="tg-data">82.3 (32k)</td><td class="tg-data">74.5 (21k)</td><td class="tg-data">18.2 (23k)</td><td class="tg-data">60.3 (1.6k)</td></tr>
-<tr><td class="tg-header">GLM-4.6</td><td class="tg-data">88.7 (25k)</td><td class="tg-data">79.5 (19k)</td><td class="tg-data">17.2 (21k)</td><td class="tg-data">54.9 (2.2k)</td></tr>
-<tr><td class="tg-header">DeepSeek-v3.1-Terminus</td><td class="tg-data">86.1 (20k)</td><td class="tg-data">74.9 (11k)</td><td class="tg-data">19.3 (18k)</td><td class="tg-data">54.4 (1.1k)</td></tr>
-<tr class="tg-midrule"><td class="tg-header">Kimi-K2-Thinking</td><td class="tg-data">86.5 (33k)</td><td class="tg-data">79.2 (25k)</td><td class="tg-data">23.9 (29k)</td><td class="tg-data">66.4 (1.7k)</td></tr>
-<tr class="tg-midrule"><td class="tg-header">RLVR-8B</td><td class="tg-data">75.4 (48k)</td><td class="tg-data">70.6 (34k)</td><td class="tg-data">9.3 (35k)</td><td class="tg-data">33.3 (1.7k)</td></tr>
-<tr><td class="tg-header"><b>PaCoRe-8B (low)</b></td><td class="tg-data">88.2 (243k)</td><td class="tg-data">75.8 (188k)</td><td class="tg-data">13.0 (196k)</td><td class="tg-data">41.8 (13k)</td></tr>
-<tr><td class="tg-header"><b>PaCoRe-8B (medium)</b></td><td class="tg-data">92.9 (869k)</td><td class="tg-data">76.7 (659k)</td><td class="tg-data">14.6 (694k)</td><td class="tg-data">45.7 (45k)</td></tr>
-<tr class="tg-bottom"><td class="tg-header"><b>PaCoRe-8B (high)</b></td><td class="tg-data"><b>94.5</b> (1796k)</td><td class="tg-data">78.2 (1391k)</td><td class="tg-data">16.2 (1451k)</td><td class="tg-data">47.0 (95k)</td></tr>
-</tbody>
-</table>
-
-*Table 1 | For each benchmark, we report accuracy together with total TTC (in thousands of tokens). For *Low*, *Medium*, and *High*, we apply the inference trajectory configurations $\vec{K}=[4]$, $[16]$, and $[32, 4]$, respectively.*
-
-
-### Key Findings
-* **Message Passing Unlocks Scaling.** Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier and lets reasoning scale freely.
-* **Breadth > Depth.** Not all compute is equal: coordinated parallel reasoning delivers far higher returns than extending a single chain.
-* **Data as a Force Multiplier.** The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.
-
-## Getting Started 🚀
-### Data
The data is provided as a `list[dict]`, where each entry represents a training instance:
-* `conversation`: The original problem/prompt messages.
-* `responses`: A list of cached generated responses (trajectories). These serve as the **input messages ($M$)** used during PaCoRe training.
-* `ground_truth`: The verifiable answer used for correctness evaluation.
-
-### Model Serving
-You can directly use `vllm serve` to serve the model! More inference details of PaCoRe are covered in the Inference Pipeline section.
-
-### Inference Pipeline
-
-*Figure 3 | Inference pipeline of PaCoRe. Each round launches broad parallel exploration, compacts the resulting trajectories into concise messages, and feeds these messages together with the question forward to coordinate the next round. Repeating this process $\hat{R}$ times yields multi-million-token effective TTC while respecting fixed context limits, with the final compacted message serving as the system's answer.*
-
-Inference code coming soon!
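Until the official inference code lands, the round-by-round loop described in the Figure 3 caption can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: `pacore_inference`, `generate`, and `compact` are hypothetical names, the model calls are replaced by toy stubs, and nothing here reflects the official pipeline's API.

```python
from typing import Callable, List

def pacore_inference(
    question: str,
    k_schedule: List[int],                      # trajectories per round, e.g. [32, 4]
    generate: Callable[[str, List[str]], str],  # hypothetical: one exploration trajectory
    compact: Callable[[str, List[str]], str],   # hypothetical: synthesize trajectories into a message
) -> str:
    """Sketch of the multi-round coordinated reasoning loop (not the official pipeline).

    Each round launches k parallel trajectories conditioned on the question plus the
    compacted messages from earlier rounds, then compacts them into one concise
    message that is passed forward. The final compacted message is the answer.
    """
    messages: List[str] = []
    for k in k_schedule:
        # Broad parallel exploration: k independent trajectories this round.
        trajectories = [generate(question, messages) for _ in range(k)]
        # Message passing: compress this round's thoughts into a concise message.
        messages.append(compact(question, trajectories))
    return messages[-1]

# Toy deterministic stand-ins so the sketch runs end-to-end.
def toy_generate(question: str, messages: List[str]) -> str:
    return f"thought(round={len(messages)})"

def toy_compact(question: str, trajectories: List[str]) -> str:
    return f"summary of {len(trajectories)} trajectories"

print(pacore_inference("What is 2+2?", [32, 4], toy_generate, toy_compact))
```

In real use the `generate` calls would be issued concurrently against a serving endpoint and `compact` would itself be a model call; only the message list, not the full trajectories, crosses round boundaries, which is what keeps each round within a fixed context limit.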

-
-
-* Infra Operations: Shaoliang Pang, Changxin Miao, Xu Zhao, Wei Zhang, Zidong Yang, Junzhe Lin, Yuxiang Yang, Chen Xu, Xin Li, Bin Wang.
-* Data Management: Xiaoxiao Ren, Zhiguo Huang, and Kang An.
-* Helpful Discussions: Liang Zhao, Jianjian Sun, Zejia Weng, JingJing Xie.
-- We are grateful to colleagues from StepFun and Tsinghua University for their valuable feedback and contributions.
-- Our work is built on amazing open-source models and data; thanks again!

-- **Emergent Multi-Agent Intelligence**: We are interested in jointly training the synthesis policy and the message-passing mechanism, yielding a minimal yet rich cooperative multi-agent learning environment and a valuable playground for studying emergent communication, self-organization, and collective intelligence.
-- **Ouroboros for Pre- and Post-Training**: We intend to develop advanced synthetic data generation techniques on top of the PaCoRe pipeline to improve both current pretraining and post-training processes.

## 📜 Citation

```bibtex
@misc{pacore2025,
-
-
-
-
}
```

---
license: mit
+task_categories:
+- text-generation
+language:
+- en
+tags:
+- math
+- code
+- reasoning
+- test-time-compute
---
+

# PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning

<div align="center">

+[**Read the Paper**](https://huggingface.co/papers/2601.05593) | [**GitHub Repository**](https://github.com/stepfun-ai/PaCoRe) | [**Download Models**](https://huggingface.co/stepfun-ai/PaCoRe-8B) | [**Training Data**](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k)

</div>

## 📖 Overview

+We introduce **PaCoRe (Parallel Coordinated Reasoning)**, a framework that shifts the driver of inference from sequential depth to **coordinated parallel breadth**, breaking the model context limitation and massively scaling test-time compute.

+The **PaCoRe-Train-8k** dataset is the high-quality training corpus used to train the model to master the **Reasoning Synthesis** capabilities required to reconcile diverse parallel insights. It includes approximately 8,000 instances across mathematics and coding domains.

------

<p align="center">
+<img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/teaser_draft_02.png" width="48%" />
+<img src="https://raw.githubusercontent.com/stepfun-ai/PaCoRe/main/figure/before_after_train_lcb_02.png" width="48%" />
</p>

+*Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance.*

+## 📚 Dataset Structure
The data is provided as a `list[dict]`, where each entry represents a training instance:

+* **`conversation`**: The original problem or prompt messages.
+* **`responses`**: A list of cached generated responses (trajectories). These serve as the **input messages ($M$)** used during PaCoRe training to teach the model how to synthesize parallel thoughts.
+* **`ground_truth`**: The verifiable answer used for correctness evaluation during the reinforcement learning (RL) process.

+The corpus includes:
+- `opensource_math`
+- `public_mathcontest`
+- `synthetic_math`
+- `code`
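As a sanity check on the schema above, one entry can be modeled and validated in plain Python. The sample values below are illustrative placeholders, not drawn from the actual dataset, and `validate_instance` is a hypothetical helper rather than part of the released tooling.

```python
from typing import Any, Dict, List

# Illustrative instance following the documented schema; the values are made up.
sample: Dict[str, Any] = {
    "conversation": [
        {"role": "user", "content": "Compute the sum of the first 100 positive integers."}
    ],
    "responses": [  # cached parallel trajectories, i.e. the input messages M
        "Pairing terms: 100 * 101 / 2 = 5050.",
        "Using the formula n(n+1)/2 with n=100 gives 5050.",
    ],
    "ground_truth": "5050",  # verifiable answer for RL reward computation
}

def validate_instance(entry: Dict[str, Any]) -> bool:
    """Check that an entry carries the three documented fields with plausible types."""
    return (
        isinstance(entry.get("conversation"), list)
        and isinstance(entry.get("responses"), list)
        and all(isinstance(r, str) for r in entry["responses"])
        and isinstance(entry.get("ground_truth"), str)
    )

dataset: List[Dict[str, Any]] = [sample]  # the corpus is a list[dict]
assert all(validate_instance(e) for e in dataset)
print(len(sample["responses"]))  # -> 2
```

A check of this kind is useful before training, since a malformed `responses` list would silently change what the synthesis policy conditions on.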

+### Releases
+The data is released in two stages:
+* 🤗 **Stage 1 (3k)**: [PaCoRe-Train-Stage1-3k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k/stage1)
+* 🤗 **Stage 2 (5k)**: [PaCoRe-Train-Stage2-5k](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k/stage2)

+## 🔍 Key Findings
+* **Message Passing Unlocks Scaling**: Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier.
+* **Breadth > Depth**: Coordinated parallel reasoning delivers higher returns than extending a single chain.
+* **Data as a Force Multiplier**: The PaCoRe corpus provides exceptionally valuable supervision—even baseline models see substantial gains when trained on it.

## 📜 Citation

```bibtex
@misc{pacore2025,
+  title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning},
+  author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Kaiwen Yan and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
+  year={2026},
+  eprint={2601.05593},
+  archivePrefix={arXiv},
+  primaryClass={cs.LG},
+  url={https://arxiv.org/abs/2601.05593},
}
```