---
license: mit
task_categories:
  - text-generation
language:
  - en
tags:
  - math
  - code
  - reasoning
  - test-time-compute
---

# PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning

## 📖 Overview

We introduce PaCoRe (Parallel Coordinated Reasoning), a framework that shifts the driver of inference scaling from sequential depth to coordinated parallel breadth, breaking the model's context limitation and massively scaling test-time compute.

The PaCoRe-Train-8k dataset is the high-quality training corpus used to teach the model the Reasoning Synthesis capability required to reconcile diverse parallel insights. It contains approximately 8,000 instances spanning the mathematics and coding domains.


*Figure 1 | Parallel Coordinated Reasoning (PaCoRe) performance.*

## 📚 Dataset Structure

The data is provided as a `list[dict]`, where each entry represents a training instance:

  • `conversation`: The original problem or prompt messages.
  • `responses`: A list of cached generated responses (trajectories). These serve as the input messages ($M$) used during PaCoRe training to teach the model how to synthesize parallel thoughts.
  • `ground_truth`: The verifiable answer used for correctness evaluation during the Reinforcement Learning (RL) process.

The corpus includes:

  • opensource_math
  • public_mathcontest
  • synthetic_math
  • code
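The per-instance schema above can be sketched in plain Python. The field names (`conversation`, `responses`, `ground_truth`) come from the dataset card; the concrete values and the `summarize` helper below are illustrative placeholders, not part of the released data.

```python
# Sketch of one PaCoRe-Train-8k instance, following the schema in the
# dataset card. Values are placeholders for illustration only.
example_instance = {
    "conversation": [
        {"role": "user", "content": "Compute 2 + 2."}  # hypothetical prompt
    ],
    "responses": [
        # Cached parallel trajectories: the input messages (M) the model
        # learns to synthesize during PaCoRe training.
        "First trajectory: 2 + 2 = 4.",
        "Second trajectory: the sum is 4.",
    ],
    "ground_truth": "4",  # verifiable answer used for RL correctness checks
}


def summarize(instance: dict) -> dict:
    """Return basic shape statistics for one training instance."""
    return {
        "num_prompt_messages": len(instance["conversation"]),
        "num_parallel_responses": len(instance["responses"]),
        "has_ground_truth": bool(instance["ground_truth"]),
    }


print(summarize(example_instance))
```

In actual use, each `responses` list would hold full model trajectories rather than short strings, and `ground_truth` would be whatever verifier-checkable answer the domain provides (a final number for math, test results for code).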

## Releases

The data is released in two stages:

## 🔍 Key Findings

  • Message Passing Unlocks Scaling: Without compaction, performance flatlines at the context limit. PaCoRe breaks the memory barrier.
  • Breadth > Depth: Coordinated parallel reasoning delivers higher returns than extending a single chain.
  • Data as a Force Multiplier: The PaCoRe corpus provides exceptionally valuable supervision: even baseline models see substantial gains when trained on it.

## 📜 Citation

```bibtex
@misc{pacore2025,
      title={PaCoRe: Learning to Scale Test-Time Compute with Parallel Coordinated Reasoning},
      author={Jingcheng Hu and Yinmin Zhang and Shijie Shang and Xiaobo Yang and Yue Peng and Zhewei Huang and Hebin Zhou and Xin Wu and Jie Cheng and Fanqi Wan and Xiangwen Kong and Chengyuan Yao and Kaiwen Yan and Ailin Huang and Hongyu Zhou and Qi Han and Zheng Ge and Daxin Jiang and Xiangyu Zhang and Heung-Yeung Shum},
      year={2026},
      eprint={2601.05593},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2601.05593},
}
```