Add pipeline tag and links to paper/code
#3
by nielsr (HF Staff) - opened

README.md CHANGED
@@ -1,12 +1,14 @@
 ---
-license: mit
 library_name: transformers
+license: mit
+pipeline_tag: text-generation
 ---
+
 # PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning
 
 <div align="center">
 
-[**Read the Paper**](https://github.com/stepfun-ai/PaCoRe
+[**Read the Paper**](https://arxiv.org/abs/2601.05593) | [**GitHub Repository**](https://github.com/stepfun-ai/PaCoRe) | [**Download Models**](https://huggingface.co/stepfun-ai/PaCoRe-8B) | [**Training Data**](https://huggingface.co/datasets/stepfun-ai/PaCoRe-Train-8k)
 
 </div>
 
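Since this hunk is what wires the model into the Hub's `text-generation` tooling, a quick sketch of the usage it enables may be useful. This is a minimal illustration assuming the repository ships standard causal-LM weights loadable by `transformers` (nothing beyond the metadata is verified by this PR):

```python
# Minimal sketch: the new `pipeline_tag: text-generation` metadata lets the
# standard transformers pipeline pick the model up by task. Assumes the repo
# holds ordinary causal-LM weights; the prompt below is just an example.
from transformers import pipeline

generator = pipeline("text-generation", model="stepfun-ai/PaCoRe-8B")
result = generator("Prove that the sum of two odd integers is even.", max_new_tokens=256)
print(result[0]["generated_text"])
```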
@@ -42,7 +44,7 @@ We open-source model checkpoints, training data, and the full inference pipeline
 
 **[2025/12/09]** We are excited to release the **PaCoRe-8B** ecosystem:
 
-* 📝 **In-depth Technical Report:** [**PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning.**](https://
+* 📝 **In-depth Technical Report:** [**PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning.**](https://arxiv.org/abs/2601.05593)
 * 🤖 **Model:**
   * [PaCoRe-8B](https://huggingface.co/stepfun-ai/PaCoRe-8B): Our final PaCoRe-trained model checkpoint!
   * [RLVR-8B-0926](https://huggingface.co/stepfun-ai/RLVR-8B-0926): The initial checkpoint of our study, obtained via strong reasoning-oriented post-training on [Qwen3-8B-Base](https://huggingface.co/Qwen/Qwen3-8B-Base).
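For reviewers who want to check the artifacts this hunk links, the checkpoints can be pulled with the standard `huggingface_hub` download API; a small sketch (not part of the PR itself):

```python
# Sketch: fetch the two checkpoints named above via huggingface_hub.
from huggingface_hub import snapshot_download

pacore_dir = snapshot_download("stepfun-ai/PaCoRe-8B")   # final PaCoRe model
rlvr_dir = snapshot_download("stepfun-ai/RLVR-8B-0926")  # initial RLVR checkpoint
print(pacore_dir, rlvr_dir)
```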
@@ -154,7 +156,7 @@ You can directly use `vllm serve` to serve the model! More inference details of
 
 *Figure 3 | Inference pipeline of PaCoRe. Each round launches broad parallel exploration, compacts the resulting trajectories into compacted messages, and feeds these messages together with the question forward to coordinate the next round. Repeating this process $\hat{R}$ times yields multi-million-token effective TTC while respecting fixed context limits, with the final compacted message serving as the system's answer.*
 
-
+For more details on the inference pipeline and examples, please refer to the [official GitHub repository](https://github.com/stepfun-ai/PaCoRe).
 
 
 ## 🙏 Acknowledgements
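The Figure 3 caption in this hunk compresses the whole inference procedure into one sentence, so a schematic sketch of the round structure may help. This illustrates only the loop the caption describes, not the authors' released pipeline; `sample_trajectory` and `compact` are hypothetical stand-ins for the model calls a real deployment (for example, against the `vllm serve` endpoint the hunk header mentions) would make:

```python
# Schematic of the round loop from the Figure 3 caption: broad parallel
# exploration, compaction into a bounded message, feed-forward to the next
# round. Illustrative only; the helpers below are hypothetical placeholders.
from typing import List

def sample_trajectory(question: str, carried: List[str]) -> str:
    # Placeholder: one reasoning rollout conditioned on the question plus the
    # compacted messages carried over from earlier rounds.
    return f"trajectory conditioned on {len(carried)} carried message(s)"

def compact(question: str, trajectories: List[str]) -> str:
    # Placeholder: distill many trajectories into one message that fits the
    # fixed context limit.
    return f"compacted summary of {len(trajectories)} trajectories"

def pacore_infer(question: str, num_rounds: int = 3, width: int = 8) -> str:
    carried: List[str] = []
    for _ in range(num_rounds):  # repeat R-hat times
        # Broad parallel exploration (written sequentially here for clarity).
        trajectories = [sample_trajectory(question, carried) for _ in range(width)]
        # Only the compacted message moves forward, so context stays bounded
        # while effective test-time compute accumulates across rounds.
        carried = [compact(question, trajectories)]
    return carried[0]  # the final compacted message is the system's answer
```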
@@ -183,9 +185,12 @@ If you are interested in our project and would like to contribute to the reasone
 
 ```bibtex
 @misc{pacore2025,
-
-
-
-
+      title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning},
+      author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Kaiwen Yan and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
+      year={2026},
+      eprint={2601.05593},
+      archivePrefix={arXiv},
+      primaryClass={cs.LG},
+      url={https://arxiv.org/abs/2601.05593},
 }
 ```